How to use docker container options inside docker-compose - docker

I am using an akka-http server in my app and MongoDB as the backend database. akka-http uses standard input to keep the server running; here is how I am binding it:
val host = "0.0.0.0"
val port = 8080
val bindingFuture = Http().bindAndHandle(MainRouter.routes, host, port)
log.info("Server online")
StdIn.readLine() // blocks until a line arrives on standard input
bindingFuture
  .flatMap(_.unbind())                 // trigger unbinding from the port
  .onComplete(_ => system.terminate()) // and shutdown when done
I need to dockerize my app. Docker closes standard input by default when it starts the container, so to keep the server running we need to pass the -i option when starting the container, like this:
docker run -p 8080:8080 -i imagename:tag
Now the problem is that I need to use docker-compose to start my app together with mongo. Here is my docker-compose.yml:
version: '3.3'
services:
  mongodb:
    image: mongo:4.2.1
    container_name: docker-mongo
    ports:
      - "27017:27017"
  akkahttpservice:
    image: app:0.0.1
    container_name: docker-app
    ports:
      - "8080:8080"
    depends_on:
      - mongodb
How can I provide the -i option to the docker-app container?
Note: after doing docker-compose up, running
docker exec -it containerid sh
did not work for me.
Any help would be appreciated.
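For reference, docker-compose exposes the equivalents of docker run's -i and -t flags as the stdin_open and tty service options; a minimal sketch of the akkahttpservice service with those options added (rest of the file unchanged):
akkahttpservice:
  image: app:0.0.1
  container_name: docker-app
  stdin_open: true   # compose equivalent of docker run -i
  tty: true          # compose equivalent of docker run -t (optional here)
  ports:
    - "8080:8080"
  depends_on:
    - mongodb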

Related

let docker access consul in docker compose

The docker compose file:
version: '3'
services:
  rs:
    build: .
    ports:
      - "9090:9090"
  consul:
    image: "consul"
    ports:
      - "8500:8500"
    hostname: "abc"
rs is a Go micro server app which accesses consul; the consul address is configured in a file like:
"microservice": {
"serviceName": "hello",
"registry": "consul=abc:8500",
However, this doesn't work; rs reports this error:
register error: Put http://abc:8500/v1/agent/service/register: dial tcp: lookup abc on 127.0.0.11:53: no such host
I can access the consul UI from the host machine at http://127.0.0.1:8500 and it works properly.
How should the network be configured so that rs can access consul?
You have changed the hostname of the consul container, but the rs service is not aware of this, so it attempts to resolve abc by querying the default DNS server, 127.0.0.11 on port 53. You can see this in the error message: that DNS server cannot resolve abc because it has no record for it.
The easiest way to solve this, and have it working on the network that docker-compose creates between the services, is the following:
version: '3'
services:
  rs:
    build: .
    # image: alpine:3.7
    ports:
      - "9090:9090"
    # command: sleep 600
    networks:
      rs-consul:
  consul:
    image: "consul"
    ports:
      - "8500:8500"
    hostname: "abc"
    networks:
      rs-consul:
        aliases:
          - abc
networks:
  rs-consul:
This will create a new network, rs-consul (check with docker network ls; it will have some prefix, mine had working_directory_name_ as the prefix). On this network the Consul container has the alias abc, so your rs service should be able to reach the Consul service via http://abc:8500/
I used the commented-out lines (image: alpine:3.7 and command: sleep 600) instead of build: . to test the connection, since I don't have your rs service code to use in build:. Once the containers were started, I used docker exec -it <container-id> sh to start a shell in the rs container, then installed curl and was able to retrieve the Consul UI page with the following command:
curl http://abc:8500/ui/
Hope this helps.
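To see what the network actually got named and which containers are attached to it, the standard inspection commands work; the project prefix below is only an example and depends on your working directory name:
docker network ls                            # look for the entry ending in _rs-consul
docker network inspect myproject_rs-consul   # lists the containers attached to that network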

Is it possible to curl across docker network via docker-compose between 2 docker-compose.yaml?

I have 2 applications running on different networks, each with its own docker-compose.yaml. I am trying to make a request from app A to app B, but it does not work.
docker exec -it app_a_running curl http://localhost:8012/user/1
I get this error:
cURL error 7: Failed to connect to localhost port 8012
docker-compose-app-a.yaml
version: "3"
services:
app:
build: go/
restart: always
ports:
- 8011:8011
volumes:
- ../src/app:/go/src/app
working_dir: /go/src/app
container_name: app-a
command: sleep 72000
networks:
- app-a-network
networks:
app-a-network:
docker-compose-app-b.yaml
version: "3"
services:
app:
build: go/
restart: always
ports:
- 8012:8012
volumes:
- ../src/app:/go/src/app
working_dir: /go/src/app
container_name: app-b
command: sleep 72000
networks:
- app-b-network
networks:
app-b-network:
Questions:
Is it possible to do this?
If it is, please suggest how :)
You can use curl on docker containers. The reason why your curl command didn't work is probably that you did not publish your docker container's port. For example, try:
docker run -d -p 8080:8080 tomcat
instead of
docker run -d tomcat
This forwards port 8080 of your machine to port 8080 of the container.
If you have a shell in your container, you can use the service name or the container's name to curl a container on your Docker network, provided your target is on the same network.
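As a sketch of getting two separate compose stacks onto the same network so that names resolve: create a network once with docker network create shared-net (the name is just an example), then declare it as external in both files and attach the services to it, e.g. for app A (app B analogous):
# docker-compose-app-a.yaml - attach the service to an extra, pre-created network
services:
  app:
    # ...existing settings from above...
    networks:
      - app-a-network
      - shared-net
networks:
  app-a-network:
  shared-net:
    external: true   # created beforehand with: docker network create shared-net
With both stacks attached to shared-net, docker exec -it app-a curl http://app-b:8012/user/1 should reach app B by its container name.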

Running Ngrok in a container using docker

https://github.com/gtriggiano/ngrok-tunnel runs ngrok inside a container. Ngrok is required to run in the container to avert security risks. But I am facing problems after running the scripts which generate the url:
$ docker pull gtriggiano/ngrok-tunnel
$ docker run -it -e "TARGET_HOST=localhost" -e "TARGET_PORT=3000" -p 4040 gtriggiano/ngrok-tunnel
I am running my Rails app on localhost:3000.
Is this a problem on my side, or can it be fixed by altering the scripts (inside the repo)?
I couldn't get this working but switched to https://github.com/shkoliar/docker-ngrok and it works brilliantly.
In my case I added it to my docker-compose.yml file:
ngrok:
  image: shkoliar/ngrok:latest
  ports:
    - 4551:4551
  links:
    - web
  environment:
    - PARAMS=http -region=eu -authtoken=${NGROK_AUTH_TOKEN} localdev.docker:80
  networks:
    dev_net:
      ipv4_address: 10.5.0.10
And it's started with everything else when I do docker-compose up -d
Then there's a web UI at http://localhost:4551/ for you to see the status, requests, the ngrok URLs, etc.
The Github page does have examples of running it manually from the command line too though, rather than via docker-compose:
Command-line Example: The example below assumes that you have a running
web server docker container named dev_web_1 with exposed port 80.
docker run --rm -it --link dev_web_1 shkoliar/ngrok ngrok http dev_web_1:80
With command-line usage, the ngrok session stays active until it is terminated with the Ctrl+C combination.
No. If you pass -p with a single number, it is the container port; the host port is randomly assigned.
Using -p, --publish ip:[hostPort]:containerPort with docker run, you can specify the host port together with the container port.
As it stands, port 4040 of the container is exposed. Not sure if your service listens on it by default.
To find the host port, execute
docker ps
and you'll see the actual port it's listening on.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1aaaeffe789d gtriggiano/ngrok-tunnel "npm start" About a minute ago Up About a minute 0.0.0.0:32768->4040/tcp wizardly_poincare
Here it's listening on localhost:32768.
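If you'd rather pin the host port instead of getting a random one, the run command from the question can publish 4040 explicitly (same image and environment variables as above):
docker run -it -e "TARGET_HOST=localhost" -e "TARGET_PORT=3000" -p 4040:4040 gtriggiano/ngrok-tunnel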
This compose file works for me. Note that in the command for ngrok you have to reference the other service by name:
version: '3'
services:
  yourwebserver:
    build:
      context: ./
      dockerfile: ...
      target: ...
    container_name: yourwebserver
    volumes:
      - ...
    ports:
      - ...
    extra_hosts:
      - 'host.docker.internal:host-gateway'
    depends_on:
      - ngrok
  ngrok:
    image: ngrok/ngrok:alpine
    environment:
      NGROK_AUTHTOKEN: '...'
    command: 'http yourwebserver:80'
    ports:
      - '4040:4040'
    expose:
      - '4040'
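Once the stack is up, the ngrok web UI is available on http://localhost:4040, and the public tunnel URL can also be read from the agent's local API, for example:
curl http://localhost:4040/api/tunnels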
I'm not sure if you have already solved this but when I was getting this error I could only solve it like this:
# docker-compose.yml
networks:
  - development
I also needed to expose port 3000 of my web container because it still wasn't exposed:
# docker-compose.yml
web:
  expose:
    - "3000"
My container for the server running in development is also on the development network. The only parameters, I believe, you need to pass for the container are image, ports, networks, environment with DOMAIN and PORT for the server container, a link, and an expose on your web container:
# docker-compose.yml
ngrok:
  image: shkoliar/ngrok
  ports:
    - 4551:4551
  links:
    - web
  networks:
    - development
  environment:
    - DOMAIN=squad_web
    - PORT=3000
Actually, to make ngrok work with your docker container you can install it outside of your project, just like the manual on their website says, and then add:
nginx:
  labels:
    - "traefik.http.routers.${PROJECT_NAME}_nginx.rule=Host(`${PROJECT_BASE_URL}`, `aaa-abc-xxx-140-177.eu.ngrok.io`)"
This particular example is for the docker4drupal docker-compose file, with traefik mapped as 80:80.

Interact with redis container started with docker compose

I have a docker compose file that links my server to a redis image:
version: '3'
services:
  api:
    build: .
    command: npm run dev
    environment:
      NODE_ENV: development
    volumes:
      - .:/home/node/code
      - /home/node/code/node_modules
      - /home/node/code/build/Release
    ports:
      - "1389:1389"
    depends_on:
      - redis
  redis:
    image: redis:alpine
I am wondering how I could open a redis-cli against the Redis container started by docker-compose to directly modify key/value pairs. I tried docker attach but it does not open any shell.
Use docker exec -it your_container_name /bin/bash to get a shell in the redis container, then execute redis-cli to modify key-value pairs.
See https://docs.docker.com/engine/reference/commandline/exec/
Install the Redis CLI on your host. Edit the YAML file to publish Redis's port:
services:
  redis:
    image: redis:alpine
    ports: ["6379:6379"]
Then run docker-compose up to redeploy the container, and you can run redis-cli from the host without needing to directly interact with Docker.
Using /bin/bash as the command (as suggested in the accepted solution) doesn't work for me with the latest redis:alpine image on Linux.
Instead, this worked:
docker exec -it your_container_name redis-cli
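With the compose file above you can also go through the service name rather than the container name; a minimal sketch, assuming the stack is already up:
docker-compose exec redis redis-cli   # opens redis-cli inside the running "redis" service container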

Turning down a web server from a running container via bash

I have the following docker-compose.yml to start a webserver with PHP.
version: "2.0"
services:
nginx:
image: nginx
ports:
- "8000:80"
volumes:
- ./web:/web
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
php:
image: php:${PHP_VERSION}-fpm
volumes:
- ./web:/web
After running docker-compose up, I can access my website perfectly at http://localhost:8000. But if I then access the nginx container, with:
$ docker-compose run nginx bash
and within the container I run:
$ service nginx stop
I can still see the website at http://localhost:8000 being displayed in the browser.
How can it be that after stopping the server in the container, the website is still being delivered?
The docker-compose run command starts a new container; you're stopping nginx in that new container, not in the one serving your site. What you want is docker attach nginx.
The documentation is located here.
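If the goal is simply to get a shell inside the container that is actually serving port 8000, docker-compose exec attaches to the running service container instead of creating a new one; a minimal sketch:
docker-compose exec nginx bash   # shell in the already-running nginx service container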
