I have a problem: in one project I need to run some containers with a proxy applied to them, but in another project I can't run Docker with that proxy, because some containers there conflict with it (I'm not sure why).
What I've added to my Docker config.json:
"proxies": {
"default": {
"httpProxy": "http://host:port/",
"httpsProxy": "http://host:port/"
}
}
I'm aware that this configuration allows me to add a "noProxy" attribute, but what exactly do I need to add there?
Since there is a "default" key under "proxies", are there specific proxy profiles I can add and switch on and off as needed?
I'm using docker compose up to create those containers. Is there anything else I can configure in my docker-compose.yml file to make the command run with a specific proxy, or even a flag or environment variable I can use?
If necessary, I could add or remove this configuration, but that wouldn't solve the issue if I needed to run both projects together.
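To make the question concrete, this is roughly what I'm picturing; the host/port values, service name, and noProxy entries below are placeholders, and I haven't verified that either variant avoids the conflict.
A "noProxy" entry in config.json (comma-separated hostnames, IPs, or CIDRs):
"proxies": {
    "default": {
        "httpProxy": "http://host:port/",
        "httpsProxy": "http://host:port/",
        "noProxy": "localhost,127.0.0.1,.internal.example.com,10.0.0.0/8"
    }
}
Or a per-service override in docker-compose.yml instead of the global default:
services:
  app:
    environment:
      HTTP_PROXY: http://host:port/
      HTTPS_PROXY: http://host:port/
      NO_PROXY: localhost,127.0.0.1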
Related
I have a proxy setting for Docker containers in $HOME/.docker/config.json:
{
    "proxies": {
        "default": {
            "httpProxy": "http://adress",
            "httpsProxy": "http://adress",
            "noProxy": ",10.225.226.0/24"
        }
    }
}
it works just fine with"old", written in python, docker-compose. but this new v2, written in go, seems to ignore this file. i.e.
docker-compose build is working, but new docker compose build gives me error from yum (from inside the container) that it cannot connect to network. tried to google it, but everything is still about old version of docker-compose, or about docker-compose file format. am i missing something? is there a new config file, or some options to turn on? i know i can set ENV HTTPS_PROXY in Dockerfile or docker-compose.yml, but i don't want to make them dependent on building environment
The fix for this was merged a month ago, so you should see it working correctly after upgrading to 2.0.0 or newer.
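If you want to confirm which binary you are actually invoking before and after the upgrade, something like this should do (assuming both the v1 script and the v2 plugin are installed):
docker-compose version   # the old Python v1 binary
docker compose version   # the new Go v2 plugin; per the fix above, it honours the proxies in ~/.docker/config.json from 2.0.0 on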
I have two different Node.js projects I created in VSCode. On occasion, project A needs to make a call into project B. Project A is running on port 60100. Project B is running on port 60200. When I try to call project B using http://localhost:60200/ I get a transport error. If I expose port 60200 in both container configs (devcontainer.json), it throws an error because the port is already in use.
I know I could use docker-compose and run them in the same project, but they have separate git homes and are standalone most of the time.
Is there anything I can do to connect them? Maybe use a separate docker-compose for each, but use the same network name in both compose files? Would that let them communicate?
Here's what I do (I use it to get my devcontainer to join a docker-compose network, but the same principles should apply)...
In my devcontainer.json...
"initializeCommand": "docker network inspect my_shared_network > /dev/null || docker network create my_shared_network --attachable",
"runArgs": [
"--network=my_shared_network",
],
This creates a named network if it's not there already, and instructs VSCode to use it for the devcontainer.
You may want to add, e.g., "--hostname=devcontainer-project-b" to the runArgs so that you can use that name in your URL.
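Putting those pieces together, a Project B devcontainer.json could look roughly like this (the network and hostname values are just examples, not anything VSCode requires):
// .devcontainer/devcontainer.json for Project B (sketch)
{
    "name": "project-b",
    // ... your usual "image"/"build" settings ...
    "initializeCommand": "docker network inspect my_shared_network > /dev/null || docker network create my_shared_network --attachable",
    "runArgs": [
        "--network=my_shared_network",
        "--hostname=devcontainer-project-b"
    ]
}
Project A, started the same way on my_shared_network, could then reach it at http://devcontainer-project-b:60200 without publishing the port on the host.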
Project A may call the other project (Project B), running in a different VSCode instance, at:
http://workspace:60200
// Project B .devcontainer/devcontainer.json
{
    "name": "Go",
    "service": "workspace", // <-- "workspace" in this example, but it could be any other value
    ...
This workspace (used as an example) is the actual network alias, visible via:
docker inspect project_B_devcontainer_workspace_1
or via the VSCode Remote Explorer in the Activity Bar -> the Inspect option on the container...
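If you prefer the command line to the Remote Explorer, the aliases can be read straight out of the container's network settings (the container name below is the one from this example and will differ on your machine):
docker inspect project_B_devcontainer_workspace_1 --format '{{json .NetworkSettings.Networks}}'
# each network entry has an "Aliases" list; "workspace" should appear there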
If I run Docker (Docker for Desktop, 2.0.0.3 on Windows 10), then access to internal infrastructure and containers is fine. I can easily do
docker pull internal.registry:5005/container:latest
But once I enable Kubernetes there, I completely lose access to the internal infrastructure: [Errno 113] Host is unreachable appears in Kubernetes itself, or connect: no route to host from Docker.
I have tried several things, including switching the NAT from DockerNAT to the Default Switch. That doesn't work without a restart, and a restart changes it back to DockerNAT, so no luck there. This option also seems not to work.
Let's start from the basics, following the official documentation:
Please make sure you meet all the prerequisites and that all the other instructions were followed.
You can also use this guide. It has more detailed info pointing to what might have gone wrong in your case.
If the above doesn't help, there are a few other things to consider:
In case you are using a virtual machine, make sure that the IP you are referring to is that of the docker engine's host and not the one on which the client is running.
Try to add tmpnginx in docker-compose.
Try deleting the pki directory in C:\programdata\DockerDesktop (first stop Docker, delete the dir and then start Docker). The directory will be recreated and the k8s-app=kube-dns labels should work fine.
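Additionally, one way to narrow down where the connectivity breaks is to check whether the registry is reachable from the Docker side and from inside the cluster at the same time; the image and registry names below are just the ones from your question, and the check assumes the registry speaks plain HTTP:
docker pull internal.registry:5005/container:latest
kubectl run net-test --rm -it --image=busybox --restart=Never -- wget -qO- http://internal.registry:5005/v2/
# if the throwaway pod also reports "no route to host", that points at the VM/NAT networking rather than at Kubernetes itself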
Please let me know if that helped.
Let's assume a scenario where I'm using a set of CLI docker run commands to create a whole environment of containers and networks (bridge type in my case) and to connect containers to particular networks.
Everything works well as long as I have only one such environment on a single machine.
But what if I want to have, on the same machine, an environment similar to the one I've just created but for a different purpose (testing)? I run into name collisions, since I can't create and start containers and networks with the same names.
So far I have tried to start the second environment the same way I did the first, but prefixing all container and network names. That worked but had a flaw: in the running application, all requests to URIs were broken, since they had the structure
<scheme>://<container-name>:<port-number>
and the application was not able to reach <prefix-container-name>.
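As a concrete illustration (the container, network, and image names here are made up):
# first environment
docker network create app-net
docker run -d --name app-db  --network app-net postgres
docker run -d --name app-web --network app-net my-web-image   # app-web reaches the database at <scheme>://app-db:<port-number>

# "copy" with prefixed names
docker network create test-app-net
docker run -d --name test-app-db  --network test-app-net postgres
docker run -d --name test-app-web --network test-app-net my-web-image   # the app still asks for app-db, which no longer resolves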
What I want to achieve is an exact copy of the first environment, running on the same machine, as a second environment that I could use to perform the application tests, etc.
Is there any concept of namespaces or something similar to it in Docker?
A command that I could put before all the docker run etc. commands I use to create the environment, so that I'd have just two bash scripts that differ only by the namespace command at their beginning?
Could using a virtual machine, i.e. Oracle VirtualBox, be the solution to my problem? Create a VM for the second environment? Isn't that overkill, and will it add an additional set of troubles?
Perhaps there is a kind of --hostname for the docker run command that would allow the container to be accessed from other containers by that name? Unfortunately, --hostname only gives the ability to access the container by this name from the container itself, not from any other container. Perhaps there is an option or command that can create an alias, virtual host, or whatever magic common name I could put into the apps' URIs, <scheme>://<magic-name>:<port-number>, so that creating a second environment with different container and network names would cause no problem as long as that magic name is available on the environment's network.
My need for an exact copy of the environment comes from tests I want to run, to check whether they also fail at the dependency level; I think this is a fairly simple scenario in a continuous integration process. Are there any dedicated open source solutions to what I want to achieve? I don't use Docker Compose, just a bash script with all the docker CLI commands to get the whole environment up and running.
Thank you for your help.
Is there any concept of namespaces or something similar to it in Docker?
Not really, no (but keep reading).
Can using a virtual machine [...] be the solution to my problem? ... Isn't that overkill, will it add an additional set of troubles?
That's a pretty reasonable solution. That's especially true if you want to further automate the deployment: you should be able to simulate starting up a clean VM and then running your provisioning script on it, then transplant that into your real production environment. Vagrant is a pretty typical tool for trying this out. The biggest issue will be network connectivity to reach the individual VMs, and that's not that big a deal.
Perhaps there is a kind of --hostname for the docker run command that will allow the container to be accessed from other containers by this name?
docker run --network-alias is very briefly mentioned in the docker run documentation and has this effect. docker network connect --alias is slightly more documented and affects a container that's already been created.
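A minimal sketch of both forms, with made-up names:
# alias given at creation time: other containers on app-net can resolve "db"
docker network create app-net
docker run -d --name test-app-db --network app-net --network-alias db postgres

# alias added to a container that already exists
docker network connect --alias db app-net some-existing-db-container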
Are there any dedicated open source solutions to what I want to achieve?
Docker Compose mostly manages this for you, if you want to move off of your existing shell-script solution: it puts a name prefix on all of the networks and volumes it creates, and creates network aliases for each container matching its name in the YAML file. If your host volume mounts are relative to the current directory then that content is fairly isolated too. The one thing you can't easily do is launch each copy of the stack on separate host ports, so you have to resolve those port conflicts yourself.
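For example (a sketch; "prod-like" and "testing" are arbitrary project names), the same compose file can be brought up twice under different project names:
docker compose -p prod-like up -d
docker compose -p testing up -d
# containers, networks and volumes get prod-like_* / testing_* prefixes,
# but inside each project the containers still reach each other by their plain service names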
Kubernetes has a concept of a namespace which is in fact exactly what you're asking for, but adopting it is a substantial investment and would involve rewriting your deployment sequence even more than Docker Compose would.
Recently I was trying to figure out what a Docker workflow looks like.
What I thought was: devs build and push images locally, and in other environments the servers just pull that image directly and run it.
But I see that a lot of public images allow people to put configuration outside the container.
For example, for the official elasticsearch image, there is a command like this:
docker run -d -v "$PWD/config":/usr/share/elasticsearch/config elasticsearch
So what is the point of putting configuration outside the container instead of running local containers quickly?
My argument is:
if I put the configuration inside a custom image, then in testing or production the server just needs to pull that same, already-built image.
if I put the configuration outside the image, then in other environments there has to be another process to get that configuration from somewhere. Sure, we could use git to source-control it, but isn't that a tedious and pointless effort to manage? And installing third-party libraries is also required.
Further question:
Should I put the application file (for example, a war file) inside the web server container or outside it?
When you are doing development, configuration files may change often, so rather than keep rebuilding the containers, you may use a volume instead.
If you are in production and need dozens or hundreds of the same container, all with slightly different configuration files, it is easier to have one single image and keep the differing configuration files outside it (e.g. using consul, etcd, zookeeper, ..., or a VOLUME).
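As a small sketch of the volume approach with Compose (the service names and config paths are placeholders):
services:
  es-a:
    image: elasticsearch
    volumes:
      - ./config-a:/usr/share/elasticsearch/config
  es-b:
    image: elasticsearch
    volumes:
      - ./config-b:/usr/share/elasticsearch/config
# one image, two instances, each picking up its own configuration from outside the image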