How to run docker via docker-compose up in ECS Fargate - docker

I need to add extra_hosts to my container.
Here's the docker-compose file
version: '3'
services:
  nodejs:
    extra_hosts:
      - "<name here>:<ip here>"
      - "<name here>:<ip here>"
    dns:
      - <ip here>
      - <ip here>
      - <ip here>
    network_mode: 'host'
    build:
      context: .
      dockerfile: Dockerfile
I am starting the container locally and logging /etc/hosts from the app.
If I start the container with docker-compose up, I can see the extra hosts added to /etc/hosts.
If I start the container via docker run <container>, the hosts file is not changed.
The same happens on AWS deployment on ECS Fargate.
Is there a way to start a container on Fargate with docker-compose up?
Or any other solution that will start the container on Fargate according to the docker-compose file?
Thanks.

If you run the docker-compose file shown, you do indeed get the additional hosts in the /etc/hosts file, because of the extra_hosts entry in your yaml file.
To get the same behaviour with docker run, you should add the --add-host [1] flag to the command, once per entry. Note that the flags must come before the image name:
docker run --add-host "<name here>:<ip here>" --add-host "<name here>:<ip here>" <container>
[1] https://docs.docker.com/engine/reference/run/#managing-etchosts
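As for the Fargate part of the question: Fargate tasks use the awsvpc network mode, and the ECS extraHosts task-definition setting (the closest equivalent of compose's extra_hosts) is documented as unsupported in that mode. One common workaround is a small entrypoint wrapper that writes the entries itself at container start. A minimal sketch, assuming you control the image's entrypoint (the names and IPs are the placeholders from the question):
#!/bin/sh
# entrypoint.sh - append the extra host entries at startup, since the
# Fargate task definition cannot inject them via extraHosts
echo "<ip here> <name here>" >> /etc/hosts
echo "<ip here> <name here>" >> /etc/hosts
# hand off to the container's original command
exec "$@"
The Dockerfile would then set ENTRYPOINT ["/entrypoint.sh"] in front of the existing CMD.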

Related

How do I execute a shell script against my localstack Docker container after it loads?

I want to spin up a localstack docker container and run a file, create_bucket.sh, with the command
aws --endpoint-url=http://localhost:4566 s3 mb s3://my-bucket
after the container starts. I tried creating this Dockerfile
FROM localstack/localstack:latest
COPY create_bucket.sh /usr/local/bin/
ENTRYPOINT []
and a docker-compose.yml file that has
version: '3.8'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      ...
    ports:
      - '4566-4583:4566-4583'
    command: sh -c "/usr/local/bin/create_bucket.sh"
but when I run
docker-compose up
the container comes up, but the command isn't run. How do I execute my command against the localstack container after container startup?
You can mount your script as a volume instead of using "command" to execute it at container startup:
volumes:
  - ./create_bucket.sh:/docker-entrypoint-initaws.d/create_bucket.sh
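Scripts mounted into /docker-entrypoint-initaws.d/ are executed by the localstack entrypoint once the service is up, so the mounted file itself can stay minimal. A sketch (using the bucket name from the question):
#!/bin/sh
# create_bucket.sh - runs inside the localstack container after startup;
# awslocal is the aws CLI wrapper preinstalled in the image
awslocal s3 mb s3://my-bucket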
Also, as they specify in their documentation, localstack must be configured carefully to work with docker-compose:
Please note that there’s a few pitfalls when configuring your stack manually via docker-compose (e.g., required container name, Docker network, volume mounts, environment variables, etc.)
In your case, I guess you are missing some volumes, the container name, and some variables.
Here is an example of a docker-compose.yml found here, which I have more or less adapted to your case
version: '3.8'
services:
  localstack:
    image: localstack/localstack
    container_name: localstack-example
    hostname: localstack
    ports:
      - "4566-4583:4566-4583"
    environment:
      # Declare which aws services will be used in localstack
      - SERVICES=s3
      - DEBUG=1
      # These variables are needed for localstack
      - AWS_DEFAULT_REGION=<region>
      - AWS_ACCESS_KEY_ID=<id>
      - AWS_SECRET_ACCESS_KEY=<access_key>
      - DOCKER_HOST=unix:///var/run/docker.sock
      - DATA_DIR=/tmp/localstack/data
    volumes:
      - "${TMPDIR:-/tmp}/localstack:/tmp/localstack"
      - /var/run/docker.sock:/var/run/docker.sock
      - ./create_bucket.sh:/docker-entrypoint-initaws.d/create_bucket.sh
Other sources:
Running shell script against Localstack in docker container
https://docs.localstack.cloud/localstack/configuration/
If you exec into the container, create_bucket.sh has not been copied. I'm not sure why, and I couldn't get that approach to work either.
However, I have a working solution if you're okay with a startup script, since your goal is to bring up the container and create the bucket in a single command.
Assign a name to your container in docker-compose.yml:
version: '3.8'
services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack
    ports:
      - '4566-4583:4566-4583'
Update your create_bucket.sh to use awslocal instead; it is already available in the container. Using the aws CLI with an --endpoint-url requires aws configure as a prerequisite.
awslocal s3 mb s3://my-bucket
Finally, create a startup script, say startup.sh, that runs the list of commands to complete the initial setup:
docker-compose up -d
docker cp create_bucket.sh localstack:/usr/local/bin/
docker exec -it localstack sh -c "chmod +x /usr/local/bin/create_bucket.sh"
docker exec -it localstack sh -c "/usr/local/bin/create_bucket.sh"
Execute the startup script
sh startup.sh
To verify: if you now exec into the running container, the bucket will have been created.
docker exec -it localstack /bin/sh
awslocal s3 ls
Try executing the command below:
docker exec <container_id> <your_command>

How to use docker-compose to start up a host docker container with volumes

I am running Docker on Windows 10 and have a Jenkins container.
From the Jenkins container's pipeline I can build a host image:
docker -H host.docker.internal:2375 tag myproject:1.0 myproject:latest
I can start a host container using docker-compose:
docker-compose -H host.docker.internal:2375 -f /var/jenkins_home/myproject/docker-compose.yml up -d
The only issue is that if there are 'volumes' in docker-compose.yml, it displays the error below:
Named volume "C:\docker\myproject:/target/myproject" is used in service "myproject" but no declaration was found in the volumes section.
docker-compose.yml file:
version: '3.9'
services:
  myproject:
    image: myproject:latest
    user: root
    container_name: myproject
    volumes:
      - C:\docker\myproject:/target/myproject
    ports:
      - 8080:8080
I understand this is because the Jenkins container cannot find 'C:\docker\myproject', but I want to share this folder between the host and the myproject container.
I tried the command below in the Jenkins container, but it does not work; -f can only read files local to the container.
docker-compose -H host.docker.internal:2375 -f c:/myproject/docker-compose.yml up -d
Any idea how to run docker-compose with volumes from the Jenkins container to control the host Docker?
Update: problem solved with the long volume syntax below. With the short syntax, compose apparently parses the Windows drive-letter path as a named volume (hence the error above), while the long bind syntax is unambiguous.
version: '3.9'
services:
  myproject:
    image: myproject:latest
    user: root
    container_name: myproject
    volumes:
      - type: bind
        source: C:\docker\myproject
        target: /target/myproject
    ports:
      - 8080:8080
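To confirm the bind was created as intended, the mounts can be inspected from the Jenkins container against the host daemon. A sketch (the --format template just lists each mount's type, source, and target):
docker -H host.docker.internal:2375 inspect --format '{{ range .Mounts }}{{ .Type }}: {{ .Source }} -> {{ .Destination }}{{ println }}{{ end }}' myproject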

Creating and copying files to a volume within the GitLab CI/CD

I created a GitLab CI/CD pipeline with the GitLab runner and GitLab itself.
Right now everything runs except one simple script.
It does not copy any files to the volume.
I'm using docker-compose 2.7.
I also have to say that I'm not 100% sure about the volumes.
Here is an abstract of my .gitlab-ci.yml:
stages:
  - build_single
  - test_single
  - clean_single
  - build_lb
  - test_lb
  - clean_lb

Build_single:
  stage: build_single
  script:
    - docker --version
    - docker-compose --version
    - docker-compose -f ./NodeApp/docker-compose.yml up --scale slave=1 -d
    - docker-compose -f ./traefik/docker-compose_single.yml up -d
    - docker-compose -f ./DockerJMeter/docker-compose.yml up --scale slave=10 -d
When I use ls, all the files are in the correct folder.
The docker-compose.yml:
version: '3.7'
services:
  reverse-proxy:
    # The official v2.0 Traefik docker image
    image: traefik:v2.0
    # Enables the web UI and tells Traefik to listen to docker
    command: --api.insecure=true --providers.docker
    ports:
      # The HTTP port
      - "7000:80"
      # The Web UI (enabled by --api.insecure=true)
      - "7080:8080"
    volumes:
      # So that Traefik can listen to the Docker events
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik/config_lb:/etc/traefik
    networks:
      - default

networks:
  default:
    driver: bridge
    name: traefik
For JMeter I'm using a copy statement to get the configuration files in after it has started, but for Traefik I need the files during the boot process.
I thought ./traefik/config_lb:/etc/traefik, with the '.' in front of traefik, creates a path relative to the docker-compose file.
Is this wrong?
I also have to say that GitLab and the runner are both dockerized on the host system. So the Docker instance is running on the host system, and gitlab-runner also uses the docker.sock.
Best Regards!
When you use the gitlab-runner in a docker container, it starts another container, the gitlab-executor, based on an image that you specify in .gitlab-ci.yml. The gitlab-runner uses the docker sock of the docker host (see /var/run/docker.sock:/var/run/docker.sock in /etc/gitlab-runner/config.toml) to start the executor.
When you then start another container using docker-compose, the docker sock is used again. Any source paths that you specify in docker-compose.yml therefore have to point to paths on the docker host; otherwise the destination in the created service will be empty (given that the source path does not exist on the host).
So what you need to do is find the path to traefik/config_lb on the docker host and provide that as the source.
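A sketch of how that could look (container and path names here are hypothetical, adjust to your setup): inspect the executor's mounts to learn the host-side location of the build directory, parameterize the volume in docker-compose_single.yml, and pass the absolute host path in.
# list host-side sources of the executor container's mounts
docker inspect --format '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ println }}{{ end }}' <executor-container>

# in docker-compose_single.yml, replace the relative path with a variable:
#   volumes:
#     - "${TRAEFIK_CONFIG_DIR}:/etc/traefik"
# then start compose with the discovered host path (hypothetical):
TRAEFIK_CONFIG_DIR=/srv/gitlab-runner/builds/group/project/traefik/config_lb \
  docker-compose -f ./traefik/docker-compose_single.yml up -d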

Docker-compose volume starting empty in container

I have the following docker-compose.yml configuration:
version: '3'
services:
  proxy:
    image: nginx:latest
    container_name: webproxy
    ports:
      - "80:80"
    volumes:
      - /etc/nginx/sites-available:/etc/nginx/sites-available
On my host machine I have a nginx.conf file at /etc/nginx/sites-available/nginx.conf.
Steps:
1. Start the container with docker-compose up
2. Go into the container's command line with sudo docker exec -it 687 /bin/bash
3. cd into /etc/nginx/sites-available
Unfortunately the folder in step 3 is empty. My nginx.conf file is not being copied.
Is my docker-compose file not configured properly, or are volumes not supposed to start out containing the host data?
Nothing looks wrong in your docker-compose.yml; I used the same file to create a docker container and it worked for me. Check the contents of /etc/nginx/sites-available on your host machine.
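A quick way to check both sides of the bind mount (a sketch using the container_name from the compose file): a bind mount mirrors the host directory rather than copying it, so both listings should match.
# on the host: verify the source directory actually contains nginx.conf
ls -l /etc/nginx/sites-available
# inside the container: the same listing should appear
docker exec webproxy ls -l /etc/nginx/sites-available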

Dynamically add docker container IP in Dockerfile (redis)

How do I dynamically add a container IP in another Dockerfile? I am running two containers: a) Redis, b) a Java application.
I need to pass the Redis URL at runtime to my Java arguments.
Currently I manually check the Redis IP and copy it into the Dockerfile, later creating a new image for the Java application using the Redis IP.
docker run --name my-redis -d redis
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-redis
In the Dockerfile (Java application):
CMD ["-Dspring.redis.host=172.17.0.2", "-jar", "/apps/some-0.0.1-SNAPSHOT.jar"]
Can I use a script to update the Dockerfile, or can I use an environment variable?
You can assign a static IP address to your docker container when you run it, following these steps:
1 - create a custom network (note: pick a subnet that does not overlap Docker's default bridge network, which on a default install already uses 172.17.0.0/16, or creation may fail with an overlap error):
docker network create --subnet=172.17.0.0/16 redis-net
2 - run the redis container on the specified network, and assign the IP address:
docker run --net redis-net --ip 172.17.0.2 --name my-redis -d redis
by then you have the static IP address 172.17.0.2 for the my-redis container, and you don't need to inspect it anymore.
3 - now it is possible to run the java application container, but it must use the same network:
docker run --net redis-net my-java-app
Of course you can optimize the solution, by using env variables or whatever you find convenient for your setup.
More info can be found in the official docs (search for --ip):
docker run
docker network
Edit (adding docker-compose):
I just found out that it is also possible to assign static IPs using docker-compose, and this answer gives an example of how.
This is a similar example, just in case:
version: '3'
services:
  redis:
    container_name: redis
    image: redis:latest
    restart: always
    networks:
      vpcbr:
        ipv4_address: 172.17.0.2
  java-app:
    container_name: java-app
    build: <path to Dockerfile>
    networks:
      vpcbr:
        ipv4_address: 172.17.0.3
    depends_on:
      - redis

networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 172.17.0.0/16
          gateway: 172.17.0.1
official docs: https://docs.docker.com/compose/networking/
Hope this helps you find your way.
You should put your containers on the same network. Then at runtime you can use the container's name to refer to it: the container's name is its hostname on the network, so it will be resolved to the container's IP address at runtime.
Follow these steps:
First, create a network for the containers:
docker network create my-network
Start redis: docker run -d --network=my-network --name=redis redis
Edit the Java application's Dockerfile: replace "-Dspring.redis.host=172.17.0.2" with "-Dspring.redis.host=redis" and build it again.
Finally, start the Java application container: docker run -it --network=my-network your_image. Optionally you can define a name for the container, but it is not required, as you do not access the Java application's container from the redis container.
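As a quick check that the name resolves on the shared network, a sketch reusing the redis image's own CLI (nothing extra to install):
docker run --rm --network=my-network redis redis-cli -h redis ping
# expected output: PONG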
Alternatively you can use a docker-compose file. By default docker-compose creates a network for the services it runs. I am not aware of your full setup, so I will provide a sample docker-compose.yml that illustrates the main concept:
version: "3.7"
services:
redis:
image: redis
java_app_image:
image: your_image_name
Either way, you can reach the redis container from the Java application dynamically, using the container's hostname instead of providing a static IP.
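One hedged refinement of the same idea, since the question also asks about environment variables: instead of hard-coding the hostname in the Dockerfile, it can be read from a variable at startup, so the image never needs rebuilding. REDIS_HOST here is an illustrative name, not something Spring reads by itself:
# Dockerfile (Java application): shell form so the variable is expanded;
# REDIS_HOST defaults to the service name "redis" if unset
CMD ["sh", "-c", "java -Dspring.redis.host=${REDIS_HOST:-redis} -jar /apps/some-0.0.1-SNAPSHOT.jar"]
Then the host can be overridden per run: docker run --network=my-network -e REDIS_HOST=redis your_image.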
