docker-compose.yml with Ngrok exposing two ports

I am running Docker Desktop with a docker-compose.yml file, and I use Ngrok to expose two ports. I start it with "docker-compose up -d", but I only managed to get one port (e.g. 7071) exposed; the relevant part of the yml file is below. I don't know how to get more than one port working. Does anyone know how to do it?
ngrok:
  image: wernight/ngrok:latest
  ports:
    - 4040:4040
  environment:
    NGROK_AUTH:
    NGROK_PROTOCOL: https
    NGROK_PORT: 7071
  networks:
    - default
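The wernight/ngrok image appears to open a single tunnel per container (that is what the lone NGROK_PORT variable configures), so one way to tunnel a second port is to run a second ngrok service. A minimal sketch, assuming 8080 is the hypothetical second port you want exposed:
ngrok-7071:
  image: wernight/ngrok:latest
  ports:
    - 4040:4040            # inspection UI for the first tunnel
  environment:
    NGROK_AUTH:
    NGROK_PROTOCOL: https
    NGROK_PORT: 7071
  networks:
    - default
ngrok-8080:
  image: wernight/ngrok:latest
  ports:
    - 4041:4040            # second inspection UI mapped to a different host port
  environment:
    NGROK_AUTH:
    NGROK_PROTOCOL: https
    NGROK_PORT: 8080       # hypothetical second port
  networks:
    - default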

Related

Hello world with gitlab ce docker container running on local ubuntu

I would like to run the docker image for gitlab community edition locally on my ubuntu laptop.
I am following this tutorial.
Currently there is already another app running on localhost, so I changed the ports in docker-compose.
What I currently have: I'm in a directory I created called 'gitlab_test'. I have set a global variable per the instructions (echo $GITLAB_HOME prints /srv/gitlab).
I pulled the GitLab CE image: docker pull store/gitlab/gitlab-ce:11.10.4-ce.0
Then, in the gitlab_test directory I added a docker-compose file:
web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: 'localhost'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'https://gitlab.example.com'
  ports:
    - '8080:8080'
    - '443:443'
    - '22:22'
  volumes:
    - '/srv/gitlab/config:/etc/gitlab'
    - '/srv/gitlab/logs:/var/log/gitlab'
    - '/srv/gitlab/data:/var/opt/gitlab'
I am unsure whether I need to put 'localhost' in place of the hostname and external_url parameters. I tried both that and leaving them as-is, and in each case nothing seems to happen. I was expecting a web interface for GitLab at localhost:8080.
I tried docker-compose up and the terminal ran for a while with a bunch of output. There's no 'done' message (perhaps because I did not use -d?), but when I visit localhost:8080 I see no GitLab interface.
How can I run the gitlab ce container?
If you want to use a different port, you should not change the container port, only the host port you are mapping the container port to. So instead of:
ports:
  - '8080:8080'
  - '443:443'
  - '22:22'
You should have done:
ports:
  - '8080:80'
  - '443:443'
  - '22:22'
This means you map the internal container port 80 (which you cannot change) to your host port 8080.
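Once the stack is up, you can confirm the mapping from the host with docker-compose (a sketch; web is the service name from the file above):
docker-compose port web 80
# prints something like 0.0.0.0:8080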
UPD: I started this service locally and I think there are a few things besides ports to consider.
You should create the $GITLAB_HOME folders (by this I mean that there is no need to register an environment variable, but rather to create a set of dedicated folders). You took '/srv/gitlab/config:/etc/gitlab' from the example, but this basically means "take the content of /srv/gitlab/config and mount it at the path /etc/gitlab inside the container". I believe paths like /srv/gitlab/config do not exist on your host.
Taking the above into account, I would suggest creating a separate folder (say my-gitlab) and creating the folders config, logs and data inside it. They are to be empty, but they will be filled when GitLab starts.
Put your docker-compose.yaml into my-gitlab and switch to that folder.
Run docker-compose up from that folder, as sketched below. Do not use the -d flag, so that you stay attached and can see any errors that happen.
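A minimal sketch of those steps (assuming the my-gitlab name from above):
mkdir -p my-gitlab/config my-gitlab/logs my-gitlab/data
cd my-gitlab
# put docker-compose.yaml here, then start attached:
docker-compose up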
Below is my docker-compose.yaml with some explanation:
web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: 'localhost'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'http://localhost'
  ports:
    - '54321:80'
    - '54443:443'
    - '5422:22'
  volumes:
    - './config:/etc/gitlab'
    - './logs:/var/log/gitlab'
    - './data:/var/opt/gitlab'
Explanation:
I have my local services running at 80, 8080, 22 and 443, so I map all the ports to ones that are currently free on my machine.
In external_url 'http://localhost', the http:// part is important. If you set https://, GitLab attempts to request an SSL certificate for your domain from Let's Encrypt; for that to work you need a public domain and some extra port configuration.
Volumes are mounted via . (the current directory), so it is important to keep the folder structure consistent and call docker-compose up from the proper place.
So in my case I could successfully connect to http://localhost:54321.
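To check readiness from the host, a simple probe works (a sketch; 54321 matches the port mapping above):
curl -I http://localhost:54321
# expect an HTTP response once GitLab has finished booting; the first start can take a few minutes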

How can I run docker-compose multiple times without port issues?

I'm trying to use docker-compose to run continuous integration tests on a Jenkins server.
Here is my docker-compose.yml:
version: '3'
services:
  elasticsearch:
    container_name: elasticsearch_${INSTANCE}
    image: docker.elastic.co/elasticsearch/elasticsearch:6.7.2
    ports:
      - 9200:9200
      - 9300:9300
    command: elasticsearch -E transport.host=0.0.0.0
    environment:
      ES_JAVA_OPTS: "-Xms2g -Xmx2g"
      discovery.type: single-node
  mainapp:
    container_name: mainapp_${INSTANCE}
    image: testbot:${INSTANCE}
    environment:
      ES_ADDRESS: http://elasticsearch_${INSTANCE}:9200
      SUBSET: ${SUBSET}
      DIRECTORY: ${DIRECTORY}
      INSTANCE: ${INSTANCE}
      TEST_CMD: ${TEST_CMD}
    command: /bin/bash /mainapp/build/tests/wrapper.sh
This works great, but when I try to run multiple tests at the same time, the previously running test exits with code 137 immediately. I think this is because the services bind fixed ports on the host, and I can't do that with multiple sets of containers.
For my purposes, the two services only need to communicate with each other, not with the host at all. I'm a bit confused about exactly how to network this.
You can do this by specifying a different project name using the COMPOSE_PROJECT_NAME environment variable or the --project-name flag for docker-compose. All services, networks, and volumes are created and named per-project.
You can drop the ports property.
If you wish, you can use the expose property instead (then you only describe the container port, e.g. expose: - 9200), but expose is purely documentary and is not functionally required.
The ports property defines ports that will be exposed on the host.
If you don't want or need ports exposed on the host, you don't need it.
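A minimal sketch of the per-project approach, assuming INSTANCE is already unique per test run (the ci_ prefix is just an illustrative name):
COMPOSE_PROJECT_NAME=ci_${INSTANCE} docker-compose up -d
# or, equivalently:
docker-compose --project-name ci_${INSTANCE} up -d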

Why would I use docker links when I still need to hardcode the address?

Hello, I have not understood the following.
In the Docker world, from what I understood, we have:
A port that the application exposes
A port that the container exposes for the application
A port that the host maps to the container port
So, given these facts, in a configuration of 2 containers within docker-compose:
If:
app  | Host Port | Container Port | App Port
app1 | 8300      | 8200           | 8200
app2 | 9300      | 9200           | 9200
If app2 needs to communicate with app1 directly through the Docker host, why would I use links, since I still have to somehow hardcode app1's hostname and port into app2's environment (the container_name of app1 and the container port of app1)? (In our example: port=8200 and host=app1Inst.)
app1:
  image: app1img
  container_name: app1Inst
  ports:
    - 8300:8200   # application code exposes port 8200, e.g. sends to a socket on 8200
  networks:
    - ret-net
app2:
  image: app2img
  container_name: app2Inst
  ports:
    - 9300:9200
  depends_on:
    - app1
  networks:
    - ret-net
  links:
    - app1
  # I still need to say here:
  # environment:
  #   - host=app1Inst
  #   - port=8200    # so what do I gain by using links?
networks:
  ret-net:
You do not need to use links on modern Docker. But you definitely should not hard-code host names or ports anywhere. (See, for example, every SO question that notes you can reach services as localhost when running directly on a developer system, but need some other host name when running in Docker.) The docker-compose.yml file is deploy-time configuration, and that is a good place to set environment variables that point from one service to another.
As you note in your proposed docker-compose.yml file, Docker networks and the associated DNS service basically completely replace links. Links existed first, but they aren't as useful any more.
Also note that Docker Compose will create a default network for you, and that the service block names in the docker-compose.yml file are valid host names. You could reduce that file to:
version: '3'
services:
  app1:
    image: app1img
    ports:
      - '8300:8200'
  app2:
    image: app2img
    ports:
      - '9300:9200'
    environment:
      APP1_URL: 'http://app1:8200'
    depends_on:
      - app1
Short answer: no, you don't need links; they are also now deprecated in Docker and not recommended.
https://docs.docker.com/network/links/
Having said that, since both your containers are on the same network ret-net, they will be able to discover and communicate freely with each other on all ports, even without the ports setting.
The ports setting comes into play for external access to the container, e.g. from the host machine.
The environment setting just sets environment variables within the container, so the app knows how to find app1Inst and the right port, 8200.
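Putting that together, a minimal fragment (a sketch; APP1_HOST and APP1_PORT are hypothetical variable names that your application would read):
app2:
  image: app2img
  networks:
    - ret-net
  environment:
    - APP1_HOST=app1   # the service name, resolved by Docker's built-in DNS
    - APP1_PORT=8200   # the container port; no ports: entry is needed for container-to-container traffic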

Running Ngrok in a container using docker

https://github.com/gtriggiano/ngrok-tunnel runs ngrok inside a container. Ngrok is required to run in the container to avert security risks. But I am facing problems after running the scripts, which generate the url.
$ docker pull gtriggiano/ngrok-tunnel
$ docker run -it -e "TARGET_HOST=localhost" -e "TARGET_PORT=3000" -p 4040 gtriggiano/ngrok-tunnel
I am running my Rails app on localhost:3000.
Is it my problem, or can it be fixed by altering the scripts (inside the repo)?
I couldn't get this working but switched to https://github.com/shkoliar/docker-ngrok and it works brilliantly.
In my case I added it to my docker-compose.yml file:
ngrok:
  image: shkoliar/ngrok:latest
  ports:
    - 4551:4551
  links:
    - web
  environment:
    - PARAMS=http -region=eu -authtoken=${NGROK_AUTH_TOKEN} localdev.docker:80
  networks:
    dev_net:
      ipv4_address: 10.5.0.10
And it's started with everything else when I do docker-compose up -d.
Then there's a web UI at http://localhost:4551/ for you to see the status, requests, the ngrok URLs, etc.
The Github page does have examples of running it manually from the command line too though, rather than via docker-compose:
Command-line example: the example below assumes that you have a running web server docker container named dev_web_1 with exposed port 80.
docker run --rm -it --link dev_web_1 shkoliar/ngrok ngrok http dev_web_1:80
With command-line usage, the ngrok session stays active until it is terminated with Ctrl+C.
No. If you pass -p with a single number, it is the container port; the host port is randomly assigned.
Using -p, --publish [ip:][hostPort:]containerPort with docker run, you can specify the host port together with the container port.
As it stands, port 4040 of the container is exposed. I am not sure whether your service listens on it by default.
To find the host port, execute
docker ps
and you'll see the actual port it is listening on.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1aaaeffe789d gtriggiano/ngrok-tunnel "npm start" About a minute ago Up About a minute 0.0.0.0:32768->4040/tcp wizardly_poincare
Here it's listening on localhost:32768.
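You can also ask Docker for the mapping directly (a sketch; the container name comes from the docker ps output above):
docker port wizardly_poincare 4040
# 0.0.0.0:32768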
This compose file works for me. Note that in the command for ngrok you have to reference the other service by name:
version: '3'
services:
  yourwebserver:
    build:
      context: ./
      dockerfile: ...
      target: ...
    container_name: yourwebserver
    volumes:
      - ...
    ports:
      - ...
    extra_hosts:
      - 'host.docker.internal:host-gateway'
    depends_on:
      - ngrok
  ngrok:
    image: ngrok/ngrok:alpine
    environment:
      NGROK_AUTHTOKEN: '...'
    command: 'http yourwebserver:80'
    ports:
      - '4040:4040'
    expose:
      - '4040'
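Once it is up, the public tunnel URL can also be read from ngrok's local API rather than the web UI (a sketch; 4040 is the inspector port published above):
curl http://localhost:4040/api/tunnels
# the JSON response lists each tunnel together with its public_url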
I'm not sure if you have already solved this, but when I was getting this error I could only solve it like this:
# docker-compose.yml
networks:
  - development
I also needed to expose port 3000 of my web container because it still wasn't exposed.
# docker-compose.yml
web:
  expose:
    - "3000"
My container for the server running in development is also on the development network. The only parameters, I believe, you need to pass for the ngrok container to work are image, ports, environment with DOMAIN and PORT for the server container, a link, and an expose on your web container:
# docker-compose.yml
ngrok:
  image: shkoliar/ngrok
  ports:
    - 4551:4551
  links:
    - web
  networks:
    - development
  environment:
    - DOMAIN=squad_web
    - PORT=3000
Actually, to make ngrok work with your docker container, you can install it outside of your project, just as the manual on their website says, and then add:
nginx:
  labels:
    - "traefik.http.routers.${PROJECT_NAME}_nginx.rule=Host(`${PROJECT_BASE_URL}`, `aaa-abc-xxx-140-177.eu.ngrok.io`)"
This particular example is for the docker4drupal docker-compose file, with Traefik mapped as 80:80.

What is a docker-compose.yml file?

I can't find a real definition of what a docker-compose file is.
Is it correct to say this:
A docker-compose file is a YAML file that allows us to deploy multiple Docker containers at the same time.
I'd like to be able to explain a bit better what a docker-compose file is.
A docker-compose.yml is a config file for Docker Compose.
It allows you to deploy, combine, and configure multiple Docker containers at the same time. The Docker "rule" is to outsource every single process to its own Docker container.
Take for example a simple web application: You need a server, a database, and PHP. So you can set three docker containers with Apache2, PHP, and MySQL.
The advantage of Docker Compose is easy configuration: you don't have to write a big bunch of Bash commands, you can predefine it all in docker-compose.yml:
db:
  image: mysql
  ports:
    - "3306:3306"
  environment:
    MYSQL_DATABASE: example_db
    MYSQL_USER: root
    MYSQL_PASSWORD: rootpw
php:
  image: php
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./SRC:/var/www/
  links:
    - db
As you can see in my example, I define port forwarding, volumes for external data, and links to the other Docker container. It's fast, reproducible, and not that hard to understand.
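With this file in place, both containers start together from the directory containing docker-compose.yml (a sketch; db and php are the service names from the file above):
docker-compose up -d
# the php container can now reach the database at host db, port 3306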
The Docker Compose file format is formally specified, which enables docker-compose.yml files to be executed by tools other than Docker, Podman for example.
Docker Compose is a tool that allows you to deploy and manage multiple containers at the same time.
A docker-compose.yml file contains instructions on how to do that.
In this file, you instruct Docker Compose, for example:
Where to find the Dockerfile to build a particular image
Which ports you want to expose
How to link containers
Which ports you want to bind to the host machine
Docker Compose reads that file and executes commands.
It is used in place of all the optional parameters you would otherwise pass when building and running a single Docker container.
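The everyday commands built on that file look like this (a sketch, run from the directory containing docker-compose.yml):
docker-compose up -d   # build images if needed and start all services in the background
docker-compose ps      # list the running services
docker-compose down    # stop and remove the containers and networks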
Example:
version: '2'
services:
  nginx:
    build: ./nginx
    links:
      - django:django
      - angular:angular
    ports:
      - "80:80"
      - "8000:8000"
      - "443:443"
    networks:
      - my_net
  django:
    build: ./django
    expose:
      - "8000"
    networks:
      - my_net
  angular:
    build: ./angular2
    links:
      - django:django
    expose:
      - "80"
    networks:
      - my_net
networks:
  my_net:
    external:
      name: my_net
This example instructs Docker Compose to:
Build nginx from the path ./nginx
Link the angular and django containers (so their IPs on the Docker network can be resolved by name)
Bind ports 80, 443, and 8000 to the host machine
Add it to the network my_net
(so all 3 containers are in the same network and therefore accessible from each other)
Then something similar is done for the django and angular containers.
If you were to use plain Docker commands instead, it would be something like:
docker network create my_net
docker build -t nginx ./nginx
docker run --name nginx --net my_net -p 80:80 -p 8000:8000 -p 443:443 nginx
So while you probably don't want to type all these options and commands for each image/container, you can write a docker-compose.yml file in which you write all these instructions in a human-readable format.
