I have proxy settings for Docker containers in $HOME/.docker/config.json:
{
"proxies":
{
"default":
{
"httpProxy": "http://adress",
"httpsProxy": "http://adress",
"noProxy": ",10.225.226.0/24"
}
}
}
It works just fine with the "old" docker-compose, written in Python, but this new v2, written in Go, seems to ignore the file. That is, docker-compose build works, but the new docker compose build gives me an error from yum (inside the container) saying it cannot connect to the network. I tried to google it, but everything is still about the old version of docker-compose, or about the docker-compose file format. Am I missing something? Is there a new config file, or some option to turn on? I know I can set ENV HTTPS_PROXY in the Dockerfile or docker-compose.yml, but I don't want to make them dependent on the build environment.
The fix for this was merged a month ago, so you should see it working correctly after upgrading to 2.0.0 or newer.
Related
I have a problem where I need to run some containers with a proxy applied to them in one project, but I can't run docker with a proxy in another project because some containers there conflict with this proxy (not sure why).
What I've added to my docker "config.json":
"proxies": {
"default": {
"httpProxy": "http://host:port/",
"httpsProxy": "http://host:port/"
}
}
I'm aware that this configuration allows me to add a "noProxy" attribute, but what exactly do I need to add there?
Are there any specific proxy profiles that I can add and switch on and off as needed since there is a "default" under proxies?
I'm using docker compose up to create those containers. Is there anything else I can configure in my docker-compose.yml file to make the command run with a specific proxy, or even a flag or environment variable?
If necessary, I could add or remove this configuration, but that wouldn't solve the issue if I needed to run both projects together.
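On the noProxy question: as far as I know, it takes a comma-separated list of hostnames, domain suffixes, and CIDR ranges that should bypass the proxy. A sketch (the host, port, domain, and range values below are placeholders, not taken from your setup):

```json
{
  "proxies": {
    "default": {
      "httpProxy": "http://host:port/",
      "httpsProxy": "http://host:port/",
      "noProxy": "localhost,127.0.0.1,.internal.example.com,10.0.0.0/8"
    }
  }
}
```

For per-project control, you can also set HTTP_PROXY/HTTPS_PROXY/NO_PROXY under a service's environment: key in that project's docker-compose.yml; an explicitly set container environment should take precedence over the default profile for those containers.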
for the development of my Python project I have setup a Remote Development Container. The project uses, for example, MariaDB and RabbitMQ. Until recently I built and started containers for those services outside of VSCode. A few days ago, I reworked the project using Docker Compose so that I can manage the other containers using the remarkable Docker extension (Id: ms-azuretools.vscode-docker, Version: 1.22.0). That is working fine, besides one issue I cannot figure out:
I can start all containers using compose up, however, the Python project Remote Development Container is not staying up. Currently, I open the project folder in a second VSCode window and use the "Reopen in Container" command.
However, it would be nice if the Python project container is staying up and I could just use the "Attach Visual Studio Code" command from the Docker extension Containers menu.
I am wondering if there is something I can add to the .devcontainer.json or some other configuration file to realize this scenario?
Any help is much appreciated!
If it helps, I can post the docker-compose.yml, Dockerfiles, or the .devcontainer.json; please let me know what is required.
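A sketch of a .devcontainer.json that attaches to an existing Compose service instead of building its own container (the service name, file path, and workspace folder below are assumptions, not taken from your project):

```json
{
  "name": "python-app",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "app",
  "workspaceFolder": "/workspace",
  "shutdownAction": "none"
}
```

With "shutdownAction": "none", VS Code should leave the Compose services running when you close the window, so the container stays up for the "Attach Visual Studio Code" command.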
After upgrading Docker Desktop, I get an error when running docker-compose up. My usual setup consists of microservices controlled with the docker-compose command. When I run docker-compose up, all the containers are started. After updating Docker Desktop, I get:
Can't separate key from value
while running docker-compose up. How do I fix this?
Check your Docker Desktop version: if it is 3.4 or newer, Docker Compose V2 is enabled by default. To disable it, go to Docker Desktop > Preferences > Experimental Features and un-check the "Use Docker Compose V2" option. This is a move by Docker to incorporate docker-compose as docker compose, and it may disrupt your usual workflow. Enjoy :)
Just in case anyone else (like me) runs into this issue:
Check the .env file local to docker-compose.yaml: is there a VARIABLE without a mapping? If so, remove it or give it a value.
More specifically:
MY_VAR=      # works fine
MY_VAR2      # fails
; MY VAR     # also fails
; MY_VAR=    # works, but fails later with an actually useful msg
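A quick sketch for spotting such lines before Compose does (assumes a POSIX shell; the file path and variable names are placeholders):

```shell
# Write a sample .env with one valid assignment and one bare name.
printf 'MY_VAR=ok\nMY_VAR2\n' > /tmp/example.env

# Print lines that are neither blank, comments, nor KEY=value pairs;
# these are what the "Can't separate key from value" error points at.
grep -vE '^[[:space:]]*(#|$)' /tmp/example.env | grep -v '='
```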
I am developing Azure IoT Edge modules using Visual Studio 2019, on Windows 10. To build the modules, a Docker container is created for each build of each module. I'm trying to change the IP address of the created container, because it currently defaults to a 172.x address that is in the public range. I need it to be a private IP, so our Symantec AV software won't think requests coming from it are external. The dotnet restore is failing because SEP (Symantec Endpoint Protection) is blocking access to the web to get the index file.
I have tried the following:
Set "bip" in my global daemon.json file, like this:
{
"registry-mirrors": [],
"insecure-registries": [],
"debug": true,
"experimental": false,
"dns": ["8.8.8.8"],
"bip": "192.168.1.5/24"
}
This doesn't work: after adding that last "bip" line, the Docker Desktop restart fails, and further starts fail with a timeout when starting the back end. The error logs for this are not helpful.
Edit the module.json file in the solution, like so:
"buildOptions": [ "--build-arg bip=192.168.1.5/24" ],
This seems to have absolutely no effect. The IP address in the container remains in the 172.x space.
Any idea how I can customize the IP address for my docker build containers?
Thanks for your help.
For some reason, the second method in my question started working, after a few reboots. I'm not sure why it didn't work at first, but for now all is working.
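For anyone who still needs to move containers off the 172.x default range, another daemon.json option worth trying is default-address-pools, which (as I understand it) controls the subnets Docker allocates for new networks, including the per-project networks Compose creates; the range below is just an example:

```json
{
  "default-address-pools": [
    { "base": "192.168.100.0/24", "size": 24 }
  ]
}
```

Unlike bip, which only affects the default bridge, this should apply to user-defined networks as well. A Docker restart is required after editing daemon.json.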
I am trying to create a local development environment using Docker Compose. I started from this example https://github.com/b00giZm/docker-compose-nodejs-examples/tree/master/03-express-gulp-watch and it works like a charm. No problem there.
However, the structure in that example is too simple and doesn't fit my needs. I am planning to run my application with CoreOS in production, so I need a bunch of other config files too. This is roughly how I changed the example above:
application
  app
    bin
    public
    routes
    views
    app.js
    gulpfile.js
    package.json
  vm
    coreos (production configs here)
    docker (development configs here)
      app
        Dockerfile
      docker-compose.yml
The Dockerfile for the actual application lives in there, because I would like to use separate Dockerfiles for production and development use.
I also changed my docker-compose.yml to this:
web:
  build: app
  volumes:
    - "../../app:/src/app"
  ports:
    - "3030:3000"
    - "35729:35729"
After this, "docker-compose build" goes OK, but "docker-compose up" doesn't. I get an error saying that the gulpfile can't be found. My guess is that this is because of the volume mounts; I assume they don't work with parent directories.
Any idea what I am doing wrong? Or if you have a working example for this situation, please share it.
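One alternative layout that sidesteps the parent-directory mount entirely is keeping docker-compose.yml at the application root. A sketch under that assumption (paths then resolve relative to the project root, using the directory tree above):

```yaml
# docker-compose.yml placed at the application/ root instead of vm/docker/
web:
  build: vm/docker/app
  volumes:
    - "./app:/src/app"
  ports:
    - "3030:3000"
    - "35729:35729"
```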
You are probably hitting the issue of using volumes too early and trying to access them in cascading Docker images.
See this:
https://github.com/docker/docker/issues/3639
dnephin was right. Removing old containers did the trick. Thanks!