I am having trouble getting extensions installed in the dev container using "Remote - Containers". I do not know if it's a bug, incorrect configuration on my end, or intended behaviour. Below is my current configuration; both files are located in the root folder of the project.
docker-compose.yml
version: "3.7"
services:
  api:
    image: node:12
    restart: always
    ports:
      - ${API_PORT}:3000
    volumes:
      - .:/usr/app
      - /usr/app/node_modules
    working_dir: /usr/app
    command: bash -c "yarn && yarn dev"
.devcontainer.json
{
  "dockerComposeFile": "docker-compose.yml",
  "service": "api",
  "workspaceFolder": "/usr/app",
  "extensions": [
    "eamodio.gitlens",
    "formulahendry.auto-rename-tag",
    "coenraads.bracket-pair-colorizer-2",
    "kumar-harsh.graphql-for-vscode",
    "esbenp.prettier-vscode",
    "ms-vscode.vscode-typescript-tslint-plugin",
    "visualstudioexptteam.vscodeintellicode"
  ]
}
The extensions listed in the .devcontainer.json are the ones I want to have installed in the dev container. Any help is appreciated!
According to the Visual Studio Code documentation, the two files need to be located in a .devcontainer directory in the workspace root.
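For example, a layout along these lines should work (a sketch; the dockerComposeFile path is resolved relative to devcontainer.json, so the "docker-compose.yml" reference can stay as it is):

project-root/
└── .devcontainer/
    ├── devcontainer.json
    └── docker-compose.yml

Keep in mind that relative paths inside the compose file (such as the .:/usr/app volume) are resolved relative to the compose file's location, so they may need adjusting after the move.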
I still had issues installing the extensions while working from behind a corporate proxy. The solution was to give the container access to the proxy server:
If you use an authenticating proxy like Cntlm on your local machine, configure it to listen on 172.17.0.1 (the Docker interface). Then define the http_proxy and https_proxy environment variables for the container. For example, in devcontainer.json:
"containerEnv": {
"http_proxy": "http://172.17.0.1:3128",
"https_proxy": "http://172.17.0.1:3128"
}
Or in docker-compose.yaml
services:
  devcontainer:
    environment:
      http_proxy: http://172.17.0.1:3128
      https_proxy: http://172.17.0.1:3128
Or configure docker-compose.yaml to make the container use the host network:
services:
  devcontainer:
    network_mode: host
Then you can just pass the same proxy variables into the container as used on the host. For example, in the docker-compose.yaml:
services:
  devcontainer:
    environment:
      http_proxy: $http_proxy
      https_proxy: $https_proxy
If you are not using a local proxy but a remote proxy inside your network, you can use the latter approach regardless of the container's network configuration (host or default).
This answer is only applicable if your network environment requires the use of a proxy server.
According to "Known limitations" of "Developing inside a Container"...
Local proxy settings are not reused inside the container, which can prevent extensions from working unless the appropriate proxy information is configured (for example global HTTP_PROXY or HTTPS_PROXY environment variables with the appropriate proxy information).
I was able to set the proxy environment variables by appending to runArgs in 'devcontainer.json': "--net=host", "--env=https_proxy=(your_proxy_host:port)".
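As a sketch, the relevant part of devcontainer.json could look like this in a Dockerfile- or image-based setup (the proxy host and port are placeholders):

"runArgs": [
    "--net=host",
    "--env=https_proxy=(your_proxy_host:port)"
]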
Alternatively, if host-access is not required for the proxy, you can just append to '.devcontainer/Dockerfile':
ENV https_proxy=(your_proxy_host:port)
Enabling proxy access in this way may also be necessary for network access from your application (not just for vscode extensions).
See also: How do I pass environment variables to Docker containers?
I also had issues installing the extensions while working from behind a corporate proxy. The solution was to give the container access to the proxy server and to disable strict SSL checking for the HTTP proxy:
"settings": {
"http.proxy": "(your_proxy_host:port)",
"http.proxyStrictSSL": false
},
Related
Hi guys, let me ask about Docker. I have several Docker containers, and I can successfully connect the containers to each other on the same network. But when I set a proxy connection, my containers can no longer connect to each other. What should I do?
Here is my docker-compose.yml file:
version: '3.7'
services:
  membership:
    container_name: membership_service
    build:
      context: ./membership
      target: dev
    ports:
      - 6010:6010
    volumes:
      - ./membership:/app
    network_mode: simas_net
  proxy:
    container_name: nginx_proxy
    build:
      context: ./proxy
      target: dev
    ports:
      - 8000:80
    volumes:
      - ./proxy/config:/etc/nginx/conf.d
    network_mode: simas_net
And here is the proxy config:
~/.docker/config.json
{
  "proxies": {
    "default": {
      "httpProxy": "http://192.168.49.1:8000/",
      "httpsProxy": "http://192.168.49.1:8000/",
      "noProxy": "localhost,127.0.0.0/8"
    }
  }
}
Thanks for your help!
This is not a big problem. Your ~/.docker/config.json is configured to automatically set three proxy environment variables when starting containers:
http_proxy
https_proxy
no_proxy
The software inside your containers reacts to those variables, which is expected; you usually configure those settings precisely so that the containers use the proxy.
In your case, however, your server software picks those settings up and tries to connect to the other container's hostname through the proxy, which fails because the proxy is not inside the container network and therefore cannot help establish the connection.
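To verify this, you can check which proxy variables a running container actually received, e.g. using the container name from the compose file above (assuming env and grep are available in the image):

docker exec membership_service env | grep -i proxy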
Multiple solutions are possible:
Configure the system that can't connect to bypass the proxy and connect directly; you might want to limit this to
the service (container name) it currently cannot reach.
Manually add a no_proxy value containing the other container's name and hope that the software interprets it correctly (no_proxy is not standardized).
Remove the general proxy configuration and explicitly set the environment variables (http_proxy, https_proxy, no_proxy) inside your compose file for the services that need the proxy for communication (see the sketch after this list).
Add the container name of the target service to your noProxy config. Not recommended, since it requires a daemon restart every time you run into this problem with a new container; it might be okay if the containers in your setup will not change.
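As a rough sketch of the third option, the service that needs the proxy could carry its own settings in the compose file instead of relying on ~/.docker/config.json; the values below simply mirror the global config, with the nginx_proxy container name added to no_proxy:

services:
  membership:
    environment:
      http_proxy: http://192.168.49.1:8000/
      https_proxy: http://192.168.49.1:8000/
      no_proxy: localhost,127.0.0.0/8,nginx_proxy,proxy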
I'm trying to build an application that is able to use local integration testing via Docker Compose with Google Cloud emulator containers, while also being able to run that same Docker Compose configuration on a Docker-based CI/CD tool (Google Cloud Build).
The kind of docker-compose.yml configuration I'm using is:
version: '3.7'
services:
  main-application:
    build:
      context: .
      target: dev
    image: main-app-dev
    container_name: main-app-dev
    network_mode: $DOCKER_NETWORK
    environment:
      - MY_ENV=my_env
    command: ["sh", "-c", "PYTHONPATH=./ python app/main.py"]
    volumes:
      - ~/.config:/home/appuser/.config
      - ./app:/home/appuser/app
      - ./tests:/home/appuser/tests
    depends_on:
      - firestore
  firestore:
    image: google/cloud-sdk
    container_name: firestore
    network_mode: $DOCKER_NETWORK
    environment:
      - GCP_PROJECT_ID=dummy-project
    command: ["sh", "-c", "gcloud beta emulators firestore start --project=$$GCP_PROJECT_ID --host-port=0.0.0.0:9000"]
    ports:
      - "9000:9000"
I added the network_mode arguments to enable the configuration to use the "cloudbuild" network type available on the CI/CD pipeline, which is currently working perfectly. However, this network is not available to local Docker, which is why I've tried to use an environment variable to switch between the local and Cloud Build CI/CD environments.
Before I added these network_mode params/args for the CI/CD, the local testing was working just fine. However, since I added them, my application either can't run or can't connect to its accompanying services, like the firestore one specified in the YAML above.
I have tried the following valid Docker network modes with no success:
"bridge" - runs the service, but doesn't allow connection between containers
"host" - doesn't allow the service to run because of not being compatible with assigning ports
"none" - doesn't allow the service to connect externally
"service" - doesn't allow the service to run due to invalid mode/service
Anyone able to provide advice on what I'm missing here?
I would assume one of these network modes would be what Docker Compose would be using if the network_mode is not assigned, so I'm not sure why all of them are failing.
I want to avoid having a separate cloud build file for the remote and local configurations, and would also like to avoid the hassle of setting up my own docker network locally. Ideally, if there were some way of applying network_mode only remotely, that would work best in my case, I think.
TL;DR:
Specifying network_mode does not give me the same result as not specifying it when running docker-compose up locally.
Because I also run the configuration in the cloud, I can't avoid specifying it.
Found a solution thanks to this thread and the comment by David Maze.
As far as I understand it, when Docker Compose is not given a specific network_mode for the containers, it creates its own private default network, named (by default) after the folder in which the docker-compose.yml file lives.
Specifying a network mode like the default "bridge" network, instead of this custom network created by Docker Compose, means container discovery between services isn't possible; for example, main-application couldn't find the firestore:9000 container.
Basically, all I had to do was set the network_mode variable to myapplication_default (given that the folder containing docker-compose.yml was called "MyApplication"), to force all services to use the same custom network set up by docker-compose up.
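A sketch of how this could be wired up without touching the compose file: keep network_mode: $DOCKER_NETWORK as it is, let Cloud Build keep providing DOCKER_NETWORK=cloudbuild, and locally point the variable at the Compose project's default network via a .env file (this assumes the project folder is called "MyApplication" and that the myapplication_default network exists or is created by Compose):

# .env (local development only)
DOCKER_NETWORK=myapplication_default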
I have a docker-compose.yml file where I define a number of services. One of the services requires an environment variable which is specific to the host computer (e.g. the IP of the device on the local WiFi network). Rather than hard-coding that, I would prefer to use a bash expression which, when evaluated, returns the correct value.
It seems that the correct way to pass in such environmental variables is through a dot-env file placed in the current working directory. See current code below:
docker-compose.yml:
version: '3.1'
services:
  ...
  mobile:
    build:
      context: .
      dockerfile: Dockerfile.mobile
    command: bash -c "export BASE_URL=$$(node base_url.js)
      && npm start --lan"
    environment:
      - REACT_NATIVE_PACKAGER_HOSTNAME=${HOST_DEVICE_IP}
    depends_on:
      - ngrok
    ports:
      # Expose Metro Bundler
      - 19001:19001
      # Expose Expo DevTools
      - 19002:19002
      # Expose App
      - 19000:19000
.env:
HOST_DEVICE_IP=$(ipconfig getifaddr en0)
However, when launching the services with docker-compose, the REACT_NATIVE_PACKAGER_HOSTNAME env var has the value $(ipconfig getifaddr en0). In other words, the .env file does not evaluate the expression and injects it as a literal string.
TLDR: How can I pass environment variables into a container where the value is a bash expression evaluated on the host machine?
Remove HOST_DEVICE_IP from your .env file, and instead set it on the command line when running docker-compose up:
HOST_DEVICE_IP=$(ipconfig getifaddr en0) docker-compose up -d
But I wonder if you actually need this? If you're on Mac or Windows, you can use the special hostname host.docker.internal to refer to the Docker host. If you're on Linux, you can just look at your default gateway address inside the container (which will correspond to the address of the bridge device on the host to which the container is connected).
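On Linux, a quick way to see that default gateway address from inside the container (a sketch; it assumes the iproute2 tools are present in the image):

# prints the gateway, which is the host's bridge address
ip route | awk '/^default/ { print $3 }'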
I have docker-compose.yml with such content:
version: '3'
services:
  some_service:
    build:
      dockerfile: Dockerfile
    ports:
      - '8080:${PORT}'
And I have my codeship-steps.yml with:
- type: parallel
  steps:
  - service: some_service
    command: printenv
Also, I have .env file with:
PORT=8080
And when I try to run jet steps locally, I get an error:
strconv.ParseInt: parsing "${PORT}": invalid syntax
I've tried to pass this env variable in different ways, but with no success. Is it possible at all, or are .env variables with Codeship only for the application inside Docker and not for the configuration?
Environment variables are not available inside the configuration files.
That said, in most cases you also don't need to explicitly specify the external port for an exposed service. Especially in combination with parallel steps this can cause issues with multiple services trying to bind to the same port. Additionally, linked services will always be able to access the some_service service on port 8080.
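As a sketch, the port mapping could be dropped entirely (or replaced with expose) so the configuration no longer needs ${PORT}; linked services still reach some_service on port 8080, assuming that is the port the service listens on:

version: '3'
services:
  some_service:
    build:
      dockerfile: Dockerfile
    expose:
      - '8080'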
I use docker-compose to describe the deployment of one of my applications. The application is composed of:
a mongodb database,
a nodejs application,
an nginx front end serving the static files of the nodejs app.
If I scale the nodejs application, I would like nginx to adapt automatically to the scaled instances.
Until now I have used the following code snippet:
https://gist.github.com/cmoore4/4659db35ec9432a70bca
This is based on the fact that some environment variables are created on link, and change when new servers are present.
But now, with version 2 of the docker-compose file format and Docker's new link system, those environment variables don't exist anymore.
How can my nginx now detect the scaling of my application?
version: '2'
services:
  nodejs:
    build:
      context: ./
      dockerfile: Dockerfile.nodejs
    image: docker.shadoware.org/passprotect-server:1.0.0
    expose:
      - 3000
    links:
      - mongodb
    environment:
      - MONGODB_HOST=mongodb://mongodb:27017/passprotect
      - NODE_ENV=production
      - DEBUG=App:*
  nginx:
    image: docker.shadoware.org/nginx:1.2
    links:
      - nodejs
    environment:
      - APPLICATION_HOST=nodejs
      - APPLICATION_PORT=3000
  mongodb:
    image: docker.shadoware.org/database/mongodb:3.2.7
Documentation states here that:
Containers for the linked service will be reachable at a hostname identical to the alias, or the service name if no alias was specified.
So I believe that you could just set your service names in that nginx conf file, like:
upstream myservice {
    server yourservice1;
    server yourservice2;
}
as they would be exported as host entries in /etc/hosts for each container.
But if you really want to have that host:port information as environment variables, you could write a script to parse the docker-compose.yml and define an .env file, or do it manually.
UPDATE:
You can get that port information from outside the container; this will return the ports:
docker inspect --format='{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' your_container_id
But if you want to do it from inside a container, then what you want is a service discovery system like ZooKeeper.
There's a long feature request thread in Docker's repo about that.
One workaround solution caught my attention. You could try building your own nginx image based on that.