My Dockerrun.aws.json looks like this:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "docker-socket",
      "host": {
        "sourcePath": "/var/run/docker.sock"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "nginx",
      "environment": [
        {
          "name": "VIRTUAL_HOST",
          "value": "demo.local"
        }
      ],
      "essential": true,
      "memory": 128
    },
    {
      "name": "nginx-proxy",
      "image": "jwilder/nginx-proxy",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "docker-socket",
          "containerPath": "/tmp/docker.sock",
          "readOnly": true
        }
      ]
    }
  ]
}
Running this locally using "eb local run" results in:
ERROR: you need to share your Docker host socket with a volume at /tmp/docker.sock
Typically you should run your jwilder/nginx-proxy with:
-v /var/run/docker.sock:/tmp/docker.sock:ro
See the documentation at http://git.io/vZaGJ
If I ssh into my docker machine and run:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock
jwilder/nginx-proxy
It creates the container and mounts the volumes correctly.
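Before comparing further, it may help to check what actually got mounted in each case. A minimal sketch, assuming the proxy container is named nginx-proxy (substitute the name or ID from docker ps):

```shell
# Print source -> destination for every mount of the proxy container.
# "nginx-proxy" is an assumed container name; adjust to what `docker ps` shows.
docker inspect -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' nginx-proxy
```

If the socket mount is missing from this list under eb local run but present under plain docker run, the problem is in how eb local translates the volume definition.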
Why is the above Dockerrun.aws.json configuration not mounting the /var/run/docker.sock:/tmp/docker.sock volume correctly?
If I run the same configuration from a docker-compose.yml, it works fine locally. However, I want to deploy this same configuration to Elastic Beanstalk using a Dockerrun.aws.json:
version: '2'
services:
  nginx:
    image: nginx
    container_name: nginx
    cpu_shares: 100
    volumes:
      - /var/www/html
    environment:
      - VIRTUAL_HOST=demo.local
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    cpu_shares: 100
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
My local setup is using:
VirtualBox 5.0.22 r108108
docker-machine version 0.7.0, build a650a40
EB CLI 3.7.7 (Python 2.7.1)
Your Dockerrun.aws.json file works fine in AWS EB for me (only changed it slightly to use our own container / hostname in place of the 'nginx' container). Is it just a problem with the 'eb local run' setup, perhaps?
Since you said you are on a Mac, try upgrading to the new Docker 1.12, which runs Docker natively on OS X, or at least to a newer version of docker-machine - https://docs.docker.com/machine/install-machine/#/installing-machine-directly
Related
I'm trying to start my app locally, using AWS S3 to upload images.
My docker-compose file looks like:
version: '3.8'
services:
  localstack:
    container_name: innoter_aws_services
    image: localstack/localstack:latest
    ports:
      - "4566:4566"
    environment:
      - SERVICES=s3,ses,lambda,dynamodb
      - EDGE_PORT=4566
      - DATA_DIR=/tmp/localstack/data
      - S3_DIR=localstack/s3
    volumes:
      - localstack-data:/var/lib/localstack
      - /var/run/docker.sock:/var/run/docker.sock
    env_file:
      - docker_aws.env
volumes:
  localstack-data:
    name: localstack-data
then, for example:
(env) D:\>docker-compose up -d
[+] Running 1/1
- Container innoter_aws_services Started
(env) D:\>awslocal s3 mb s3://my-bucket
make_bucket: my-bucket
(env) D:\>awslocal s3 mb s3://my-bucket02
make_bucket: my-bucket02
(env) D:\>awslocal s3 ls
2023-01-30 12:19:08 my-bucket
2023-01-30 12:19:18 my-bucket02
Then, after docker-compose stop, I run docker-compose up -d again, and awslocal s3 ls comes back empty: no buckets, no information. When I inspect the container, I get this info about the mounts:
"Mounts": [
    {
        "Type": "volume",
        "Source": "localstack-data",
        "Target": "/var/lib/localstack",
        "VolumeOptions": {}
    }
],
...
"Mountpoint": "/var/lib/docker/volumes/localstack-data/_data",
"Name": "localstack-data",
"Options": null,
"Scope": "local"
What do I need to do to persist the data locally in my volume?
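One way to narrow this down is to check whether LocalStack writes anything into the named volume at all. A sketch, assuming an alpine image is available to mount the volume with:

```shell
# Show the volume's metadata and host mountpoint.
docker volume inspect localstack-data

# List the volume's actual contents via a throwaway container.
docker run --rm -v localstack-data:/data alpine ls -R /data
```

If the volume stays empty even while the buckets exist, the data is never reaching /var/lib/localstack in the first place.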
I am trying to get angular and nginx containers in docker-compose to speak to each other on a google-compute vm instance (Debian OS), without success. Here is my docker-compose.yml:
version: '3'
services:
  angular:
    container_name: angular
    hostname: angular
    build: project-frontend
    ports:
      - "80:80"
    #network_mode: host
  nodejs:
    container_name: nodejs
    hostname: nodejs
    build: project-backend
    ports:
      - "8080:8080"
    # network_mode: host
I have read the docs and numerous SO posts such as this, and understand that angular should be trying to find node at http://nodejs:8080/, but I'm getting:
POST http://nodejs:8080/login/ net::ERR_NAME_NOT_RESOLVED
When I do docker network inspect I see this:
[
    {
        "Name": "project_default",
        "Id": "2d1665ce09f712457e706b83f4ae1139a846f9ce26163e07ee7e5357d4b28cd3",
        "Created": "2020-05-22T11:25:22.441164515Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.28.0.0/16",
                    "Gateway": "172.28.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "b0fceb913ef14b0b867ae01ce4852ad4a0827c06194102082c0d4b18d7b80464": {
                "Name": "angular",
                "EndpointID": "83fba04c3cf6f7af743cae87116730805d030040f286706029da1c7a687b199c",
                "MacAddress": "02:42:ac:1c:00:03",
                "IPv4Address": "172.28.0.3/16",
                "IPv6Address": ""
            },
            "c181cd4b0e9ccdd793c4e1fc49067ef4880cda91228a10b900899470cdd1a138": {
                "Name": "nodejs",
                "EndpointID": "6da8ad2a83e2809f68c310d8f34e3feb2f4c19b40f701b3b00b8fb9e6f231906",
                "MacAddress": "02:42:ac:1c:00:02",
                "IPv4Address": "172.28.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
I'm not sure what other steps can help me to debug this.
Thanks.
EDIT:
Thanks to this post, I was able to ping the nodejs container from the angular container successfully:
$ sudo docker exec -it angular ping nodejs
PING nodejs (172.28.0.2): 56 data bytes
64 bytes from 172.28.0.2: seq=0 ttl=64 time=0.079 ms
64 bytes from 172.28.0.2: seq=1 ttl=64 time=0.105 ms
I also tested the port on the nodejs container, and it seems to be open:
$ sudo docker port nodejs
8080/tcp -> 0.0.0.0:8080
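To rule out the container-to-container path entirely, an HTTP request from inside the angular container could be tried as well. A sketch, assuming wget (or curl) exists in the image, which is not guaranteed for slim nginx-based images:

```shell
# Request the nodejs service by its compose service name from inside "angular".
docker exec angular wget -qO- http://nodejs:8080/login/
```

If this succeeds, container-to-container networking is fine, and the ERR_NAME_NOT_RESOLVED is coming from somewhere else (for example, the browser on the host, which cannot resolve the nodejs name).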
EDIT:
I'm starting to think this is a Google Compute VM question, as I have it running on my local Linux box without any problem. I have updated the title accordingly.
You need to make sure they are on the same network. You can do that by adding the following lines at the end of your compose file:
networks:
  default:
    external:
      name: project.local
Note, you have to create the project.local network yourself. When you run docker-compose up, it'll tell you how to do it.
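For reference, creating that network manually is a one-liner; docker-compose will also print the exact command it expects if the network is missing:

```shell
# Create the external network referenced by the compose file.
docker network create project.local
```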
As #ShipluMokaddim says, containers must be in the same network or they can't reach each other. What I recommend is creating a new network:
version: '3'
services:
  angular:
    container_name: angular
    build: project-frontend
    ports:
      - "80:80"
    networks:
      - mynetwork
  nodejs:
    container_name: nodejs
    build: project-backend
    ports:
      - "8080:8080"
    networks:
      - mynetwork
networks:
  mynetwork:
With this you will be fine.
I have a docker-compose used for production which I'm hoping to incorporate with VS Code's dockerized development environment.
./docker-compose.yml
version: "3.6"
services:
  django: &django-base
    build:
      context: .
      dockerfile: backend/Dockerfile_local
    restart: on-failure
    volumes:
      - .:/website
    depends_on:
      - memcached
      - postgres
      - redis
    networks:
      - main
    ports:
      - 8000:8000 # HTTP port
      - 3000:3000 # ptvsd debugging port
    expose:
      - "3000"
    env_file:
      - variables/django_local.env
...
Note how I'm both publishing and exposing port 3000 here. This is a result of me playing around to get things working; I'm not sure if I need one, the other, or both.
My ./devcontainer then looks like the following:
./devcontainer/devcontainer.json
{
    "name": "Dev Container",
    "dockerComposeFile": ["../docker-compose.yml", "docker-compose.extend.yml"],
    "service": "dev",
    "workspaceFolder": "/workspace",
    "shutdownAction": "stopCompose",
    "settings": {
        "terminal.integrated.shell.linux": null,
        "python.linting.pylintEnabled": true,
        "python.pythonPath": "/usr/local/bin/python3.8"
    },
    "extensions": [
        "ms-python.python"
    ]
}
.devcontainer/docker-compose.extended.yml
version: '3.6'
services:
  dev:
    build:
      context: .
      dockerfile: ./Dockerfile
    external_links:
      - django
    volumes:
      - .:/workspace:cached
    command: /bin/sh -c "while sleep 1000; do :; done"
The idea is that I want to be able to run VS Code attached to the dev service, and from there run the debugger attached to the django service using the following launch.json config:
{
    "name": "WP",
    "type": "python",
    "request": "attach",
    "port": 3000,
    "host": "localhost",
    "pathMappings": [
        {
            "localRoot": "${workspaceFolder}",
            "remoteRoot": "/website"
        }
    ]
},
When I do this, though, I get an error: VS Code says connect ECONNREFUSED 127.0.0.1:3000. How can I get the ports mapped so this will work? Is it even possible?
Edit
Why not just attach directly to the django service?
The dev container simply contains Python and Node runtimes for linting and IntelliSense purposes while using VS Code. The idea behind creating a new service devoted specifically to debugging in the dev environment is that ./docker-compose.yml contains more than a few services, and some of the devs on my team like to turn them off sometimes to keep resource consumption low. Creating a container specifically for dev also makes it easier to set up .devcontainer/devcontainer.json to add things like extensions to one container, without needing to add them after attaching to the running "non-dev" container. If this works, VS Code runs within the dev container (see this).
I was able to solve this by changing the host in the launch.json from localhost to host.docker.internal. The resulting launch.json configuration then looks like this:
{
    "name": "WP",
    "type": "python",
    "request": "attach",
    "port": 3000,
    "host": "host.docker.internal",
    "pathMappings": [
        {
            "localRoot": "${workspaceFolder}",
            "remoteRoot": "/web-portal"
        }
    ]
},
Is it possible, in the docker-compose file, to put the container's NetworkSettings information into an environment variable?
I have the following docker-compose.yml file:
version: '3.7'
services:
  sdt-proxy:
    image: myimage
    ports:
      - 32770-32780:8181
    environment:
      - SERVER_PORT=8181
It maps the container port 8181 to a random host port in the range 32770-32780. When I run the container with docker-compose up, I can see the mapped port with docker inspect:
.....
"NetworkSettings": {
    "Bridge": "",
    "SandboxID": "83e6933aaf7b09b8ae1238d3dbb71bdd495c14927a5a509b332afc17cda6d854",
    "HairpinMode": false,
    "LinkLocalIPv6Address": "",
    "LinkLocalIPv6PrefixLen": 0,
    "Ports": {
        "8181/tcp": [
            {
                "HostIp": "0.0.0.0",
                "HostPort": "32771"
            }
        ]
    },
...
So, I know that the internal port 8181 (my application running inside the container) is mapped to the host port 32771.
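As an aside, the mapped port can also be read on the host side without scanning the full inspect output, either with docker port or an inspect format string. A sketch; the container name sdt-proxy_1 is an assumption about what compose generated, so check docker-compose ps for the real one:

```shell
# Show the host address mapped to container port 8181.
# "sdt-proxy_1" is an assumed container name; check `docker-compose ps`.
docker port sdt-proxy_1 8181

# Equivalent, via an inspect format string:
docker inspect -f '{{ (index (index .NetworkSettings.Ports "8181/tcp") 0).HostPort }}' sdt-proxy_1
```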
I need to pass this information, the host port 32771, to my application. Is it possible to do something like this in the docker-compose file?
version: '3.7'
services:
  sdt-proxy:
    image: myimage
    ports:
      - 32770-32780:8181
    environment:
      - SERVER_PORT=8181
      - MY_CONTAINER_PORT= <the running container port 32771>
I am deploying a compose file onto the UCP via:
docker stack deploy -c docker-compose.yml custom-stack-name
In the end I want to deploy multiple compose files (each compose file describes the setup for a separate microservice) onto one docker network e.g. appsnetwork
version: "3"
services:
  service1:
    image: docker/service1
    networks:
      - appsnetwork
  customservice2:
    image: myprivaterepo/imageforcustomservice2
    networks:
      - appsnetwork
networks:
  appsnetwork:
The docker stack deploy command automatically creates a new network with a generated name like this: custom-stack-name_appsnetwork
What are my options?
Try creating the network yourself first:
docker network create --driver=overlay --scope=swarm appsnetwork
After that, make the network external in your compose file:
version: "3"
services:
  service1:
    image: nginx
    networks:
      - appsnetwork
networks:
  appsnetwork:
    external: true
After that, run two copies of the stack:
docker stack deploy --compose-file docker-compose.yml stack1
docker stack deploy --compose-file docker-compose.yml stack2
docker inspect for both containers shows IPs in the same network:
$ docker inspect 369b610110a9
...
"Networks": {
    "appsnetwork": {
        "IPAMConfig": {
            "IPv4Address": "10.0.1.5"
        },
        "Links": null,
        "Aliases": [
            "369b610110a9"
        ],

$ docker inspect e8b8cc1a81ed
...
"Networks": {
    "appsnetwork": {
        "IPAMConfig": {
            "IPv4Address": "10.0.1.3"
        },
        "Links": null,
        "Aliases": [
            "e8b8cc1a81ed"
        ],