As part of a school challenge I need to run a Jenkins environment using Docker with the port mapping 7070:9090.
I'm trying, so far unsuccessfully, to change Jenkins's default access port (8080) in a Docker container.
Here's my code:
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins-image
    ports:
      - "7070:8080"
    volumes:
      - "jenkins_home:/var/jenkins_home"
      - "/var/run/docker.sock:/var/run/docker.sock"
I managed to change the host port to 7070, but not the container's default access port, 8080.
All the tutorials I've found online only explain how to change the host port.
Any advice on how to change port 8080 and still have Jenkins running?
The access port is handled by Docker rather than by Jenkins. The port mapping syntax is HOST:CONTAINER, so if Jenkins is listening on 7070 inside your container, the following should work for you.
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins-image
    ports:
      - "8080:7070"
    volumes:
      - "jenkins_home:/var/jenkins_home"
      - "/var/run/docker.sock:/var/run/docker.sock"
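If the goal from the original challenge is the 7070:9090 mapping, Jenkins itself has to listen on 9090 inside the container. A minimal sketch, assuming jenkins-image is based on the official jenkins/jenkins image (which passes the JENKINS_OPTS environment variable through to jenkins.war):

version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins-image
    environment:
      # assumption: the image honors JENKINS_OPTS like the official jenkins/jenkins image
      - JENKINS_OPTS=--httpPort=9090
    ports:
      - "7070:9090"
    volumes:
      - "jenkins_home:/var/jenkins_home"
      - "/var/run/docker.sock:/var/run/docker.sock"
volumes:
  jenkins_home:

Jenkins would then answer on http://localhost:7070 from the host and on port 9090 inside the Docker network.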
Related
I have:
NiFi running in a Docker container, started via Docker Compose with the following config:
version: '3'
services:
  nifi:
    image: apache/nifi:latest
    container_name: nifi
    ports:
      - "8443:8443"
    volumes:
      - ./database_repository:/opt/nifi/nifi-current/database_repository
      - ./flowfile_repository:/opt/nifi/nifi-current/flowfile_repository
      - ./content_repository:/opt/nifi/nifi-current/content_repository
      - ./provenance_repository:/opt/nifi/nifi-current/provenance_repository
      - ./state:/opt/nifi/nifi-current/state
      - ./logs:/opt/nifi/nifi-current/logs
      - ./conf:/opt/nifi/nifi-current/conf
    restart: always
An FTP server running on localhost (outside the container)
In the NiFi flow I'm using a GetFTP processor, which unsuccessfully tries to connect to localhost:21.
I understand the problem is that localhost is not reachable because the container is isolated. What do I need to configure in docker-compose.yml to solve this?
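One common approach, sketched here under the assumption of Docker Desktop or Docker 20.10+ on Linux, is to give the container a name for the host via extra_hosts and point GetFTP at that name instead of localhost:

version: '3'
services:
  nifi:
    image: apache/nifi:latest
    container_name: nifi
    ports:
      - "8443:8443"
    extra_hosts:
      # host-gateway maps host.docker.internal to the host's IP
      # (built into Docker Desktop; on Linux it needs Docker 20.10+)
      - "host.docker.internal:host-gateway"
    restart: always
    # volumes from the original file omitted here for brevity

In the GetFTP processor the hostname would then be host.docker.internal rather than localhost; the FTP server may also need to accept connections from the Docker bridge subnet.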
I have a service, some-service, that needs to make http requests to a Jenkins service - both running in separate Docker containers. My issue is that whenever I make a request, my connection is refused.
Both some-service and Jenkins are running on ports 3030 and 4040 with host names some-service and jenkins, respectively.
I can hit Jenkins successfully on my local machine outside of some-service with:
curl -v http://localhost:4040/
However, I cannot reach Jenkins from inside some-service using:
curl -v http://jenkins:4040/
I'm using this simple docker-compose.yaml file to create both some-service and Jenkins:
version: '3'
services:
  some-service:
    container_name: service
    image: service:latest
    hostname: some-service
    build:
      context: service/
      dockerfile: Dockerfile
    environment:
      GET_HOSTS_FROM: dns
    networks:
      - eg-net
    ports:
      - 3030:3030
    depends_on:
      - jenkins
    links:
      - jenkins
    labels:
      kompose.service.type: LoadBalancer
  jenkins:
    container_name: jenkins
    image: jenkinsci/blueocean
    restart: always
    hostname: jenkins
    networks:
      - eg-net
    ports:
      - 4040:8080
    volumes:
      - ./jenkins-data:/var/jenkins_home
networks:
  eg-net:
    driver: bridge
You can't access http://jenkins:4040/ from within your service because port 4040 is only published on the host machine. That's why curl -v http://localhost:4040/ works on your host.
If you want to access Jenkins from within another container, you have to use port 8080, because that is the port exposed inside the Docker network. So curl -v http://jenkins:8080/ from within your service will work.
Hope this clarifies it.
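A quick way to confirm both paths, assuming curl is available inside the some-service image (the container is named service in the Compose file above):

# from the host: works because 4040 is published on the host
curl -v http://localhost:4040/

# from inside the some-service container: use the container-side port 8080
docker exec -it service curl -v http://jenkins:8080/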
Hi, I'm starting a Docker container using docker-compose, but when I try to connect via localhost I can't connect. Here is the docker-compose file I'm using:
version: '3.3'
services:
  standalone:
    image: apachepulsar/pulsar
    expose:
      - 8080
      - 6650
    environment:
      - PULSAR_MEM=" -Xms512m -Xmx512m -XX:MaxDirectMemorySize=1g"
    command: >
      /bin/bash -c
      "bin/apply-config-from-env.py conf/standalone.conf
      && bin/pulsar standalone"
I'm using Windows 10.
Be aware of what expose does; as the documentation says:
Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.
My guess is that you instead want to publish them and make them available to the host. To do so:
services:
  standalone:
    image: apachepulsar/pulsar
    ports:
      - "8080:8080"
      - "6650:6650"
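With the ports published like this, a quick check from the Windows host (the admin path below is just one example of Pulsar's REST API on the 8080 web service port):

# after docker-compose up, from the host
curl http://localhost:8080/admin/v2/clusters

A Pulsar client running on the host would then connect with the service URL pulsar://localhost:6650.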
I am trying to access a docker container from another container using the localhost address.
The compose file is pretty simple. Both containers' ports are exposed.
There are no problems when building.
In my host machine I can successfully execute curl http://localhost:8124/ and get a response.
But inside the django_container, trying the same command gives a Connection refused error.
I tried adding them to the same network, but the result didn't change.
However, if I execute it with that container's internal IP, like curl 'http://172.27.0.2:8123/', I get the response.
Is this the default behavior? How can I reach clickhouse_container using localhost?
version: '3'
services:
  django:
    container_name: django_container
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    container_name: clickhouse_container
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
With this line, - "8124:8123", you're mapping the clickhouse container's port 8123 to port 8124 on localhost, which allows you to access clickhouse from the host at port 8124.
If you want to hit the clickhouse container from within the Docker network, you have to use the container's hostname. This is what I like to do:
version: '3'
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    hostname: clickhouse
    container_name: clickhouse
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
If you make the changes above, you should be able to access clickhouse from within the django container like this: curl http://clickhouse:8123.
As in Billy Ferguson's answer, you can reach it via localhost from the host machine only because you define a port mapping that routes localhost:8124 to clickhouse:8123.
But from another container (django), you can't. If you insist, there is an ugly workaround: share the host's network namespace with network_mode, but then the django container shares the host's entire network stack.
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    network_mode: "host"
    # no ports or links here: published ports are ignored under host networking,
    # and links cannot be combined with network_mode: "host"
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
It depends on the config.xml settings. If config.xml contains <listen_host>0.0.0.0</listen_host>, you can use clickhouse-client -h your_ip --port 9001.
I have a docker compose file that defines a service to run my application and a service that the application depends on:
services:
  frontend:
    build:
      context: .
    volumes:
      - "../.:/opt/app"
    ports:
      - "8080:8080"
    links:
      - redis
    image: node
    command: ['yarn', 'start']
  redis:
    image: redis
    expose:
      - "6379"
For development this compose file exposes 8080 so that I can access the running code from a browser.
In Jenkins, however, I can't publish that port, because two jobs running simultaneously would conflict by trying to bind the same port on the Jenkins host.
Is there a way to prevent docker-compose from binding service ports? Like an inverse of the --service-ports flag?
For context:
In Jenkins I run tests using docker-compose run frontend yarn test, which doesn't map ports and so isn't a problem.
The issue appears when I try to run end-to-end browser tests against the application. I use a container to run CodeceptJS tests against a running instance of the app. In that case I need the frontend to start before I run the tests, as they will fail if the app is not up.
Q. Is there a way to prevent docker-compose from binding service ports?
It makes no sense to prevent something you are explicitly asking for: docker-compose starts things exactly as the docker-compose.yml file describes.
I propose duplicating the frontend service using extends::
version: "2"
services:
  frontend-base:
    build:
      context: .
    volumes:
      - "../.:/opt/app"
    image: node
    command: ['yarn', 'start']
  frontend:
    extends:
      service: frontend-base
    links:
      - redis
    ports:
      - "8080:8080"
  frontend-test:
    extends:
      service: frontend-base
    links:
      - redis
    command: ['yarn', 'test']
  redis:
    image: redis
    expose:
      - "6379"
Use it like this:
docker-compose up frontend        # in dev (publishes 8080)
docker-compose run frontend-test  # in jenkins (no ports published)
Note that extends: is not available in version: "3", but they will bring it back again in the future.
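If you are stuck on version: "3", where extends: is unavailable, a similar split can be sketched with multiple Compose files (the file names below are just illustrative): a base file with no port binding, plus an override file that adds the binding only for development.

# docker-compose.yml (base: no host port published)
version: "3"
services:
  frontend:
    build:
      context: .
    volumes:
      - "../.:/opt/app"
    image: node
    command: ['yarn', 'start']
    links:
      - redis
  redis:
    image: redis
    expose:
      - "6379"

# docker-compose.dev.yml (override: publish 8080 for local development)
version: "3"
services:
  frontend:
    ports:
      - "8080:8080"

Then:

docker-compose -f docker-compose.yml -f docker-compose.dev.yml up   # dev: 8080 published
docker-compose run frontend yarn test                               # jenkins: nothing published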
To avoid binding a fixed host port, you can list only the container port in the ports section; Docker then publishes it on a random free host port, so parallel Jenkins jobs don't conflict.
Instead of using this:
ports:
  - 8080:8080
Just use this instead:
ports:
  - 8080
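A test container on the same Compose network can still reach the app at frontend:8080 regardless of which host port was assigned. If something on the host does need to know the randomly assigned port, docker-compose can report it (the port number in the comment is just an illustration):

docker-compose port frontend 8080
# prints something like 0.0.0.0:32768, the randomly assigned host port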