I used the command below, but it doesn't work. Do we have to use --ip to pass a given IP to Docker?
docker run -p 80:80080 --ip xx.xx.xx.xx jenkins
Finally, I resolved it by adding an environment parameter to the docker command:
-e JENKINS_OPTS="--httpPort=80"
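For context, a sketch of the full command with that parameter (the image tag is an assumption, and note that binding to port 80 inside the container may require root privileges):
docker run -d -p 80:80 -e JENKINS_OPTS="--httpPort=80" jenkins/jenkins:latest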
docker run -p 80:8080 -d jenkins/jenkins:latest
docker run --name jenkinsci -p 8081:8080 jenkins/jenkins:lts
If 8080 is already in use on the host, you can use 8081 instead but forward it to 8080, since Jenkins starts on 8080 inside the container:
-p 8081:8080
I used Docker to install GitLab.
Step 1:
docker run -d \
  -p 8023:443 \
  -p 8020:80 \
  -p 8022:22 \
  --name gitlab \
  --restart always \
  -v /home/gitlab/config:/etc/gitlab \
  -v /home/gitlab/logs:/var/log/gitlab \
  -v /home/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce
Step 2:
vi /home/gitlab/config/gitlab.rb
external_url 'http://192.168.71.5'
gitlab_rails['gitlab_ssh_host'] = '192.168.71.5'
gitlab_rails['gitlab_shell_ssh_port'] = 8022
Step 3:
docker exec -it gitlab /bin/bash
gitlab-ctl reconfigure
docker restart gitlab
When I add a new project new-test in GitLab and open http://192.168.71.5:8020/root/new-test in Chrome, the Clone with HTTP URL shown is http://192.168.71.5/root/new-test.git.
When I run git clone http://192.168.71.5/root/new-test.git, it fails:
fatal: unable to access 'http://192.168.71.5/root/new-test.git/': Failed connect to 192.168.71.5:80; Connection refused
Why is the Clone with HTTP URL not http://192.168.71.5:8022/root/new-test.git?
-p 8023:443
-p 8020:80
-p 8022:22
should be
docker run -d \
  -p 443:443 \
  -p 80:80 \
  -p 22:22
so that the host ports match the ports GitLab puts in its default URLs.
You have to change the value of external_url in gitlab.rb to include the port number. This will also be reflected in the HTTP(S) clone URL.
Note that in the last sentence of your question you use port 8022 (which is the SSH port). This should probably be 8020.
Pay attention: when you change the value of external_url, nginx inside the container will start listening on that same port. So you have to configure the ports of your Docker container as follows:
-p 8023:443
-p 8020:8020
-p 8022:22
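With that mapping, the external_url from step 2 would then include the port, for example:
external_url 'http://192.168.71.5:8020'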
I installed the Docker local registry as below:
docker pull registry
and then ran:
docker run -d -p 5001:5001 -v C:/localhub/registry:/var/lib/registry --restart=always --name hub.local registry
because port 5000 is in use by another application.
But I can't reach
http://localhost:5001/v2/_catalog
The first part of the -p value is the host port and the second part is the port within the container.
This command runs the registry on host port 5001:
docker run -d -p 5001:5000 --name hub.local registry
If you want to change the port the registry listens on within the container as well, use this:
docker run -d -e REGISTRY_HTTP_ADDR=0.0.0.0:5001 -p 5001:5001 --name hub.local registry
docker run -d -p 5001:5000 -v C:/localhub/registry:/var/lib/registry --restart=always --name hub.local registry
Keep the internal port the same and change only your local (host) port.
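To verify the registry is reachable, query the catalog endpoint from the question:
curl http://localhost:5001/v2/_catalog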
I want to run Docker inside another Docker container. My main container is running in a VirtualBox VM with Ubuntu 18.04 on my Windows 10 machine. When I try to run it, I get:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
How can I resolve this issue?
Yes, you can do this. Check the dind (Docker in Docker) image page on Docker Hub for how to achieve it: https://hub.docker.com/_/docker
Your error indicates that either dockerd in the top-level container is not running, or you didn't mount docker.sock into the dependent container so it can communicate with the dockerd running in your top-level container.
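A minimal sketch of the socket-mount variant, using the official docker image as the inner container:
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:latest docker ps
If this lists the host's containers, the socket mount is working.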
I am running ElectricFlow in a Docker container in my Ubuntu VirtualBox VM using this docker command: docker run --name efserver --hostname=efserver -d -p 8080:8080 -p 9990:9990 -p 7800:7800 -p 7070:80 -p 443:443 -p 8443:8443 -p 8200:8200 -i -t ecdocker/eflow-ce. Inside this Docker container, I want to install and run Docker so that my CI/CD pipeline in ElectricFlow can access and use docker commands.
From your description, ecdocker/eflow-ce is your CI/CD solution container, and you just want to use the docker command in this container, so you don't need the dind solution. You can just access the host's Docker server from the container.
Something like the following:
docker run --privileged --name efserver --hostname=efserver -d -p 8080:8080 -p 9990:9990 -p 7800:7800 -p 7070:80 -p 443:443 -p 8443:8443 -p 8200:8200 -v $(which docker):/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock -i -t ecdocker/eflow-ce
Compared to your old command:
Add --privileged.
Add -v $(which docker):/usr/bin/docker, so you can use the docker client in the container.
Add -v /var/run/docker.sock:/var/run/docker.sock, so the client in the container can talk to the host's Docker daemon.
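After that, a quick check that the client inside the container can reach the host daemon (container name taken from the command above):
docker exec -it efserver docker ps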
Could I expose different Docker container ports on the same HTTP port on the host?
Example
docker container run --publish 80:80 -d -it --name wp wordpress
docker container run --publish 90:80 -d -it --name ci jenkins
docker container run --publish 100:80 -d -it --name gitlab gitlab/gitlab-ce
With those commands you are not using the same port on the host. The nomenclature for -p is "hostPort:containerPort", so you are mapping container port 80 from each of them to host ports 80, 90, and 100. So there is no conflict at all.
Anyway, to answer your question about a possible conflict, your commands would first have to be:
docker container run --publish 80:80 -d -it --name wp wordpress
docker container run --publish 80:80 -d -it --name ci jenkins
docker container run --publish 80:80 -d -it --name gitlab gitlab/gitlab-ce
If you run those commands, you'll get an error saying Bind for 0.0.0.0:80 failed: port is already allocated.
Anyway, in the hypothetical case of Docker allowing that without an error:
The first mapping you create is the one that would work, because the docker run command adds iptables rules to open ports from the container to the host, and iptables rules work in "first match wins" style. So you would have three iptables rules in this case, but the one that works is the first.
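If you want to see the rules Docker creates, they live in the DOCKER chain of the nat table on the host (standard iptables command, nothing specific to this setup):
sudo iptables -t nat -L DOCKER -n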
So I have 3 ports that should be exposed to the machine's interface. Is it possible to do this with a Docker container?
To expose just one port, this is what you need to do:
docker run -p <host_port>:<container_port>
To expose multiple ports, simply provide multiple -p arguments:
docker run -p <host_port1>:<container_port1> -p <host_port2>:<container_port2>
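For instance, using the jenkins/jenkins image seen earlier in this thread (8080 is the web UI, 50000 the agent port):
docker run -d -p 8081:8080 -p 50000:50000 jenkins/jenkins:lts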
Step 1
In your Dockerfile, you can use the EXPOSE instruction to expose multiple ports.
e.g.
EXPOSE 3000 80 443 22
Step 2
Then build a new image based on the above Dockerfile.
e.g.
docker build -t foo:tag .
Step 3
Then use -p to map host ports to the container ports, as defined in the EXPOSE instruction of the Dockerfile above.
e.g.
docker run -p 3001:3000 -p 23:22 foo:tag
In case you would like to expose a range of contiguous ports, you can run docker like this:
docker run -it -p 7100-7120:7100-7120/tcp <image_name>
If you use a docker-compose.yml file:
services:
  varnish:
    ports:
      - 80
      - 6081
You can also specify the host port explicitly as HOST_PORT:CONTAINER_PORT:
varnish:
  ports:
    - "81:80"
    - "6081:6081"
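Either snippet can then be started with the standard compose command:
docker-compose up -d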
Use this as an example:
docker create --name new_ubuntu -it -p 8080:8080 -p 15672:15672 -p 5432:5432 ubuntu:latest bash
look at what you've created (and copy its CONTAINER ID, xxxxx):
docker ps -a
now run the miracle-maker word (start):
docker start xxxxx
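and check the published ports (the container was named new_ubuntu in the create command above):
docker port new_ubuntu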
good luck
Only one point to add: you have the option to specify a range of ports to expose in the Dockerfile and when running it.
In the Dockerfile:
EXPOSE 8888-8898
Build image:
docker build -t <image_name>:<version> -f dockerfile .
When running the image:
docker run -it -p 8888-8898:8888-8898 -v C:\x\x\x:/app <image_name>:<version>
If you are creating a container from an image and would like to expose multiple ports (not publish them), you can use the following command:
docker create --name <container_name> --expose 7000 --expose 7001 <image_name>
Now, when you start this container using the docker start command, the ports configured above will be exposed.
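A sketch of the difference: ports that are only exposed are not reachable from the host; to publish every exposed port to a random host port, pass -P (capital P) to docker run:
docker run -d -P <image_name>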