I have a simple Docker container which runs just fine on my local machine. I was hoping to find an easy checklist for how I could publish and run my Docker container on cPanel (I'm on a CentOS 7 server). Any help would be appreciated. Currently I get:
iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 127.0.0.1 --dport 80 -j DNAT --to-destination 172.17.0.2:8888 ! -i docker0: iptables: No chain/target/match by that name.
and the port cannot be defined.
Yes, you can install Docker on a cPanel/WHM server just like on any other CentOS server or virtual machine.
Just follow these simple steps (as root):
1) yum install -y yum-utils device-mapper-persistent-data lvm2 (these should already be installed...)
2) yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
3) yum install docker-ce
4) enable docker at boot (systemctl enable docker)
5) start docker service (systemctl start docker)
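Put together, the same steps as a single shell session (a minimal sketch for CentOS 7, run as root):
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl enable docker
systemctl start docker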
The guide above is for CentOS 7.x. Don't expect to find any references or options related to Docker in the WHM interface; you will control Docker via the command line from an SSH shell.
I have some docker containers already running on my cPanel/WHM server and I have no issues with them. I basically use them for caching, proxying and other similar stuff.
And as long as you follow these instructions, you won't mess up any of your cPanel/WHM services/settings or current cPanel accounts/settings/sites/emails etc.
See references here.
Adding onto Tiago's comment:
Docker is now installed using the docker.io package rather than docker-ce, so skip step 2 and modify step 3 accordingly.
I'm a beginner with Docker.
I have pulled a CentOS 7 image from Docker Hub and run it.
I need to SSH into the Docker container (CentOS 7) from my host.
I got the Docker container's IP using docker inspect container-id.
I have installed the following packages:
initscripts
systemd.x86_64
systemd-libs.x86_64
openssh
firewalld
net-tools
When I tried to start the firewall to open the port for SSH (22):
[root@a6f3e3eb095c ~]# systemctl start firewall
Failed to get D-Bus connection: Operation not permitted
I also tried:
[root@a6f3e3eb095c ~]# /usr/lib/systemd/systemd --system &
[1] 353
[root@a6f3e3eb095c ~]# systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
Detected virtualization xen.
Detected architecture x86-64.
Welcome to CentOS Linux 7 (Core)!
Set hostname to <a6f3e3eb095c>.
Cannot determine cgroup we are running in: No such file or directory
Failed to allocate manager object: No such file or directory
[1]+ Exit 1 /usr/lib/systemd/systemd --system
How do I start the firewall/SSH inside the Docker container?
Inside the Docker container, run the following commands:
yum update -y glibc-common
yum install -y sudo passwd openssh-server openssh-clients tar screen crontabs strace telnet perl libpcap bc patch ntp dnsmasq unzip pax which
rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum install -y hiera lsyncd sshpass rng-tools
service sshd start;
sed -i 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config;
sed -i 's/#UsePAM no/UsePAM no/g' /etc/ssh/sshd_config;
sed -i 's/#PermitRootLogin yes/PermitRootLogin yes/' /etc/ssh/sshd_config;
sed -i 's/enabled=0/enabled=1/' /etc/yum.repos.d/CentOS-Base.repo
mkdir -p /root/.ssh/;
rm -f /var/lib/rpm/.rpm.lock;
echo "StrictHostKeyChecking=no" > /root/.ssh/config;
echo "UserKnownHostsFile=/dev/null" >> /root/.ssh/config
echo "root:password" | chpasswd
(or)
Alternatively, you can simply pull a CentOS image with SSH already configured from Docker Hub:
https://hub.docker.com/search/?isAutomated=0&isOfficial=0&page=1&pullCount=0&q=centos+ssh&starCount=0
https://hub.docker.com/r/kinogmt/centos-ssh/
https://hub.docker.com/r/jdeathe/centos-ssh/
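For example, a hedged sketch of running one of those images (assuming the image exposes sshd on container port 22; host port 2222 is arbitrary, and the login user/credentials depend on the image's documentation):
docker run -d --name centos-ssh -p 2222:22 jdeathe/centos-ssh
# then connect from the host
ssh -p 2222 <user>@localhost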
You can avoid the "Failed to get D-Bus connection: Operation not permitted" error (i.e. the need to run systemd inside a Docker container) by using https://github.com/gdraheim/docker-systemctl-replacement. After that, docker exec works fine for doing things inside a container.
If you really do need an SSH or SFTP container, then you can use my Docker image as a source image for your own, or run it directly.
If you are using the official CentOS 7 image and require systemd, there are instructions on how to enable it under the section "Systemd integration".
However, based on the following:
I need to ssh in to the docker container(CentOS 7) from my host.
You can use docker exec to run commands in a running (backgrounded) container. For images that have bash available, you can get an interactive TTY and run bash as follows from your host, where <container> can be either the name or the ID:
docker exec --tty --interactive <container> bash
OR
docker exec -ti <container> bash
Finally, it's unlikely to be necessary to install a firewall package in your image: the operator decides which of the exposed ports to publish, and you can use Docker networking to expose only the necessary public-facing services.
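For instance, rather than running firewalld inside the container, the operator publishes just the ports that should be reachable (a minimal sketch; the image name and port numbers are illustrative):
# publish only container port 22 on host port 2222; nothing else is reachable from outside
docker run -d -p 2222:22 my-ssh-image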
If you are using the Docker CLI, then you can get into the Docker container using the following command:
docker exec -it containerId bash
I am not sure how to SSH into the Docker container, but if you want to perform basic operations inside it, you can make use of the above docker command.
I can't figure out how to use my containers after they are running in Docker (through docker-compose).
I've been reading up on Docker for the last few weeks and I'm interested in building an environment that I can develop on and replicate efficiently to collaborate with my colleagues.
I've read through the following articles:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-centos-7
https://www.digitalocean.com/community/tutorials/how-to-install-wordpress-and-phpmyadmin-with-docker-compose-on-ubuntu-14-04
https://www.digitalocean.com/community/tutorials/how-to-work-with-docker-data-volumes-on-ubuntu-14-04
I've installed Docker and Docker Compose through a Bash script made up of the commands found in the previous articles: (Run as sudo)
## Install Docker package
function docker_install {
echo "__ installing docker"
# Run installer script
wget -qO- https://get.docker.com/ | sh
# Add user parameter to docker group
usermod -aG docker $1
# Enable docker on boot
sudo systemctl enable docker.service
# Start docker now
sudo systemctl start docker.service
}
## Install docker-compose package
function docker-compose_install {
echo "__ installing docker-compose"
# Update yum repo
sudo yum install -y epel-release
# Install pip with "yes" flag
sudo yum install -y python-pip
# Install SSL Extension
sudo pip install backports.ssl_match_hostname --upgrade
# Install docker-compose through pip
sudo pip install docker-compose
# Upgrade python
sudo yum upgrade python*
}
# Call functions (pass the invoking user so it is added to the docker group)
docker_install "$SUDO_USER"
docker-compose_install
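After the script finishes (assuming it is saved as install-docker.sh, a hypothetical name), a quick sanity check that both tools are on the PATH:
sudo bash install-docker.sh
docker --version
docker-compose --version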
And I have a docker-compose.yml file for my Docker images. (For now I'm only pulling PHP and Apache as a proof of concept; I will include MySQL once I can get PHP and Apache working together.)
php:
  image: php
  links:
    - httpd
httpd:
  image: httpd
  ports:
    - "80:80"
I run sudo docker-compose up -d and don't receive any errors:
Creating containerwrap_httpd_1
Creating containerwrap_php_1
And my question is:
When I run php -v and service httpd status after the images are running, I receive:
-bash: php: command not found
and
● httd.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
If the images are running, why am I not able to use PHP or see the status of Apache? How am I supposed to utilize my PHP and Apache containers?
Update
I've asked a more informed version of this question again.
What you are missing is that containers are like separate machines. Running the commands on your Droplet, you will not see anything running inside those machines; you need to connect to them. One easy way is:
docker exec -it CONTAINER_ID /bin/bash. This will give you access to each container's bash.
For your Apache server that would be containerwrap_httpd_1, and the name will change each time unless your docker-compose.yaml is set up to use a constant name when the service is created and started. Of course, you can also curl localhost, or open a browser and browse to your Droplet's IP address (provided it forwards HTTP on the default port, or use the forwarding port). You will see Apache's default web page, because you set up the 80:80 port mapping for the service.
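A minimal sketch of both checks (the container name is taken from the compose output above; the curl check assumes Apache is listening on port 80):
# open a shell inside the Apache container
docker exec -it containerwrap_httpd_1 bash
# or, from the host, confirm Apache responds on the published port
curl -I http://localhost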
It seems you have some misunderstanding of Docker's concepts. You can imagine each Docker container as a lightweight virtual machine (of course a Docker container is different from a real VM). So basically, after you create the php and httpd containers, the php and httpd commands are only available inside those containers' bash. You cannot run these commands on your host, because your host is a different machine from your containers. If you want to attach to a container's bash, check out the docker exec command; you should then be able to run php or httpd commands in the container's bash.
If you want to connect to your php container from your host, you can try the docker run -p parameter, which publishes a container's port(s) to the host.
Or, if you want to connect your php and httpd containers together, you should read Docker's networking documentation to figure out how to use links or Docker networks.
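For example, a minimal sketch of the same compose file with fixed container names (container_name and the quoted port mapping are illustrative additions, not from the original question):
php:
  image: php
  container_name: my_php
  links:
    - httpd
httpd:
  image: httpd
  container_name: my_httpd
  ports:
    - "80:80"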
I'm trying to access an ActiveMQ instance on my local machine from inside a Docker container also running on my machine. AMQ is listening on 0.0.0.0:61616. I tried to configure my program running in the container to use the IP address of docker0 as well as enp6s0, but neither worked.
If I however use the --net=host option it suddenly works, no matter which ip address I use. The problem is that I can't use the option in production as the code that starts the container doesn't support this. So if it's not possible to change the default network in the Dockerfile, I have to fix this in a different way.
EDIT: My Dockerfile
FROM java:8-jre
RUN mkdir -p /JCloudService
COPY ./0.4.6-SNAPSHOT-SHADED/ /JCloudService
RUN apt-get update && apt-get install -y netcat nano
WORKDIR /JCloudService
CMD set -x; /bin/sh -c '/JCloudService/bin/JCloudScaleService'
And the run command: docker run -it jcs:latest. With this command it doesn't work; it only works if I add --net=host.
--net=host works because it tells Docker to put your container in the same networking stack as your host machine.
To connect to a service running on your host machine, you need the IP of your host on the docker0 network. Run ip addr show docker0 on your host; then you should be able to use that IP and port 61616 to reach the host from within the container.
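A minimal sketch (172.17.0.1 is only the typical docker0 default and may differ on your host; nc is available because the Dockerfile installs netcat):
# on the host: find the docker0 address
ip addr show docker0          # e.g. inet 172.17.0.1/16
# inside the container: check that ActiveMQ is reachable on that address
nc -vz 172.17.0.1 61616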
I'm building an image for GitHub's Linkurious project, based on an image already on Docker Hub for the Neo4j database. The Neo4j image automatically runs the server on port 7474 and my image runs on port 8000.
When I run my image I publish both ports (could I do this with EXPOSE?):
docker run -d --publish=7474:7474 --publish=8000:8000 linkurious
But only my server seems to run. If I hit http://[ip]:7474/ I get nothing. Is there something special I have to do to make sure they both run?
* Edit I *
here's my Dockerfile:
FROM neo4j/neo4j:latest
RUN apt-get -y update
RUN apt-get install -y git
RUN apt-get install -y npm
RUN apt-get install -y nodejs-legacy
RUN git clone git://github.com/Linkurious/linkurious.js.git
RUN cd linkurious.js && npm install && npm run build
CMD cd linkurious.js && npm start
* Edit II *
To perhaps help explain my quandary, I've asked a different question.
EXPOSE is there to allow inter-container communication (within the same Docker daemon), with the docker run --link option.
Port mapping is there to map EXPOSEd ports to the host, to allow client-to-container communication. So you need --publish.
See also "Difference between “expose” and “publish” in docker".
See also an example with "Advanced Usecase with Docker: Connecting Containers"
Make sure, though, that the IP is the right one ($(docker-machine ip default)).
If you are using a VM (meaning, you are not using docker directly on a Linux host, but on a Linux VM with VirtualBox), make sure the mapped ports 7474 and 8000 are port forwarded from the host to the VM.
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,7474,,7474"
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,8000,,8000"
In the OP's case, this is using Neo4j: see "Neo4j with Docker", based on the neo4j/neo4j image and its Dockerfile:
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["neo4j"]
It is not meant to be used for installing another service (like nodejs), where the CMD cd linkurious.js && npm start would completely override the neo4j base image CMD (meaning neo4j would never start).
It is meant to be run on its own:
# interactive with terminal
docker run -i -t --rm --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
# as daemon running in the background
docker run -d --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
And then used by another image, with a --link neo4j:neo4j directive.
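A hedged sketch of that split (the linkurious image name and its port 8000 come from the question; the volume path and link alias are illustrative):
# run the database from the stock image
docker run -d --name neo4j -v $HOME/neo4j-data:/data -p 7474:7474 neo4j/neo4j
# run the Linkurious image separately, linked to the database
docker run -d --name linkurious --link neo4j:neo4j -p 8000:8000 linkurious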
I have a very basic question regarding Docker.
I have a Docker host installed on ubuntuA.
So, to test this from the client (UbuntuB), should Docker be installed on the UbuntuB machine as well?
More precisely, only the Docker client needs to be installed on UbuntuB.
On UbuntuB, install the Docker client only; it is around 17 MB:
# apt-get update && apt-get install -y curl
# curl https://get.docker.io/builds/Linux/x86_64/docker-latest -o /usr/local/bin/docker
# chmod +x /usr/local/bin/docker
In order to run docker commands, you need to talk to the daemon on ubuntuA (port 2375 is used since Docker 1.0):
$ docker -H tcp://ubuntuA:2375 images
or
$ export DOCKER_HOST=tcp://ubuntuA:2375
$ docker images
See more detail at http://docs.docker.com/articles/basics/
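Note that for this to work, the daemon on ubuntuA must also be listening on TCP. A minimal sketch using the modern dockerd syntax (this exposes the API without TLS, so only do it on a trusted network):
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375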
Yes, you have to install Docker on both the client and the server.