I have cloned the following project:
https://github.com/sequenceiq/docker-ambari.
I have successfully managed to create the 3 ambari-docker containers and now I am trying to select an HDP version through the Ambari UI.
My problem is that each time it tries to reach a public repo, the request comes back with a 400 code ("could not access base url").
I tried to curl a repo from inside the ambari-server container, but it returns "could not resolve host".
I am running this inside a VM (Ubuntu 18.04) behind a company firewall.
I have no problem with curl inside the VM itself, but it does not work in the container.
I have already tried whatever I could find on proxy settings for docker, ambari, yum, etc., and since I am new to this I don't know what else to look for.
I expect to be able to choose a public repo so I can continue with the cluster installation wizard.
For Ambari to communicate during setup with the hosts it deploys to and manages, certain ports must be open and available. The easiest way to do this is to temporarily disable the firewall (firewalld), as follows:
systemctl disable firewalld
service firewalld stop
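If the containers still cannot resolve or reach external hosts after that, it is worth checking whether the corporate proxy is reachable from inside the ambari-server container at all. A minimal diagnostic sketch (the container name amb-server and the proxy host/port are assumptions, substitute your own):
# attach to the Ambari server container (name is an assumption based on the docker-ambari scripts)
docker exec -it amb-server bash
# inside the container: point curl at the corporate proxy explicitly (placeholder proxy host/port)
export http_proxy=http://proxy.mycompany.example:3128
export https_proxy=http://proxy.mycompany.example:3128
curl -I http://example.com
If curl only works with the proxy variables set, those same variables also need to reach the Ambari server process and yum inside the containers, not just the Docker daemon on the host.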
Related
I'm trying to pull an image from a server with multiple proxies.
Setting a proper proxy depends on which zone the machine is trying to docker pull from.
For the record, adding the one relevant proxy in /etc/systemd/system/docker.service.d/http-proxy.conf on the machine which is pulling the image works fine.
But the image is supposed to be downloaded in multiple zones, which require different proxies based on where the machine is.
I tried two things:
1. Passed the list of proxies in the http-proxy.conf, like this:
[Service]
Environment="HTTP_PROXY=http://proxy_1:port/,http://proxy_2:port/"
Environment="HTTPS_PROXY=http://proxy_1:port/,http://proxy_2:port/"
Environment="NO_PROXY=localhost"
Some machines require http://proxy_1:port/, which works fine.
But on a machine that requires http://proxy_2:port/ to pull, it does not work; Docker does not fall back to another proxy to try. It returns this error:
Error response from daemon: Get HTTP:<ip>:<proxy_1> proxyconnect tcp: dial tcp <ip>:<proxy_1>: connect: no route to host
Of course, if I were to provide only the second (working) proxy in the configuration, it would work.
2. Passing the proxy as a parameter to docker pull, as with docker build/run, but that is not supported per the documentation.
I am looking for a way to set up proxies in such a way that either
Docker falls back to trying the other provided alternate proxies,
OR
I can provide a proxy dynamically at pull time. (This will be part of an automated process which determines the relevant proxy to pass.)
I do not want to constantly change the http-proxy file and restart docker for obvious reasons.
What are my options?
If you're using a sufficiently recent Docker (i.e. 17.07 or higher) you can have this configuration on the client side. Refer to the official documentation for details on the configuration.
You still need multiple configuration files for the various proxy configurations you need, but you can switch between them without restarting the Docker daemon.
In order to do something similar (not exactly related to proxies) I use a shell script that wraps the invocation of the docker client, pointing it to a custom configuration location via the --config option.
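A sketch of what that client-side setup could look like, with one config directory per zone (directory names, proxy addresses, and the image name are placeholders):
# ~/.docker-zone1/config.json  (repeat per zone with the relevant proxy)
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy_1:port",
      "httpsProxy": "http://proxy_1:port",
      "noProxy": "localhost"
    }
  }
}
# the wrapper script then picks the directory for the current zone at pull time
docker --config ~/.docker-zone1 pull myorg/myimage:latest
Which directory to pass can be decided by the same automated process that already knows which proxy each zone needs.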
If I run Docker (Docker for Desktop, 2.0.0.3 on Windows 10), then access to internal infrastructure and containers is fine. I can easily do
docker pull internal.registry:5005/container:latest
But once I enable Kubernetes there, I completely lose access to internal infrastructure: [Errno 113] Host is unreachable appears in Kubernetes itself, or connect: no route to host from Docker.
I have tried several ways, including switching NAT from DockerNAT to Default Switch. That one doesn't work without a restart, and a restart changes it back to DockerNAT, so no luck there. This option also seems not to work.
Let's start with the basics from the official documentation:
Please make sure all the prerequisites and other instructions have been met.
You can also use this guide. It has more details pointing to what might have gone wrong in your case.
If the above doesn't help, there are a few other things to consider:
In case you are using a virtual machine, make sure that the IP you are referring to is that of the docker engine's host and not the one on which the client is running.
Try to add tmpnginx in docker-compose.
Try to delete the pki directory in C:\programdata\DockerDesktop (first stop Docker, delete the dir and then start Docker). The directory will be recreated and the k8s-app=kube-dns labels should work fine.
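For the pki step, a rough sketch from an elevated Windows command prompt (the service name com.docker.service is an assumption and may differ between Docker Desktop versions; stopping Docker from the tray icon works just as well):
net stop com.docker.service
rmdir /s /q "C:\ProgramData\DockerDesktop\pki"
net start com.docker.service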
Please let me know if that helped.
I am using the latest version of Docker for Windows. Linux containers work smoothly, but I am getting the problem below:
wsarecv: An existing connection was forcibly closed by the remote host.
It occurs when fetching some specific images from repos; in my case I am fetching microsoft/aspnet. I have created a Dockerfile and am trying to build my custom image, following the repository's instructions for creating a Dockerfile.
The build reaches the state shown in the screenshot (not reproduced here), and after that I get this "forcibly closed by remote host" error.
My Dockerfile content is:
# base image with IIS and .NET Framework 4.7
FROM microsoft/aspnet:4.7
# build argument for the site source directory (defaults to the build context root)
ARG site_root=.
# copy the site contents into the IIS web root
ADD ${site_root} /inetpub/wwwroot
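To separate the Dockerfile from the network problem, it may help to pull the base image on its own first; if this fails with the same wsarecv error, the issue is connectivity to the registry rather than anything in the build:
docker pull microsoft/aspnet:4.7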
I am not sure exactly why this worked, as I was only trying to pull a couple of Microsoft images, but enabling Settings > General > Expose daemon on tcp://localhost:2375 without TLS worked for me. Following that I reverted the change, but it is nice to have in the back pocket. It might be related to firewall settings in Windows. I am using Windows 10 Professional.
I had been consistently encountering this error from inside a corporate network. We added mcr.microsoft.com to a firewall white-list, and everything worked as intended.
To debug:
Check the blocked connections.
Try unblocking internet access on the machine before you whitelist the URLs one by one.
Allow the URLs below through the Windows firewall, any corporate proxies, and the corporate firewall:
"*.docker.io"
"*.docker.com"
"*.microsoft.com" - Windows Update dependencies for Windows containers
"*.mscr.io" - again, for Microsoft container registries
Worked in my case. Could be more to whitelist, depending on what you are trying to pull.
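A quick way to verify the whitelist from the Docker host is to hit the registry endpoints directly (registry-1.docker.io is Docker Hub's registry host; any HTTP response at all means the host is reachable, while a timeout usually means it is still blocked):
curl -sSI https://registry-1.docker.io/v2/
curl -sSI https://mcr.microsoft.com/v2/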
Trying to get a private repo running on my EC2 instance so my other docker hosts created by docker-machine can pull from the private repo. I've disabled SSL and have put up a firewall to compensate, which allows my test server (the one I'm trying to pull on) to connect to my main EC2 instance (the private repo). So far I can push to the private repo hosted on my main EC2 instance (I was getting an EOF error before disabling SSL), but I get the following error when I run this on my test server:
docker pull ec2-xx-xx-xxx-xxx.us-west-2.compute.amazonaws.com:5000/scoredeploy
this is the error it spits out:
Error response from daemon: Get https://ec2-xx-xx-xxx-xxx.us-west-2.compute.amazonaws.com:5000/v1/_ping: EOF
Googling this error yields results of people having similar issues, but without any fixes.
Anybody have any idea of what's going on here?
You might need to set the --insecure-registry <registry-ip>:5000 flag on the docker daemon's startup command on your non-docker-registry machine. In your case: --insecure-registry ec2-xx-xx-xxx-xxx.us-west-2.compute.amazonaws.com:5000
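For reference, on hosts where the daemon reads /etc/docker/daemon.json, the same setting can live there instead of the startup command (the registry address below is the placeholder from the question; restart the daemon afterwards):
{
  "insecure-registries": ["ec2-xx-xx-xxx-xxx.us-west-2.compute.amazonaws.com:5000"]
}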
If you want to use your already-running docker machine, this should help you out setting the flag: https://docs.docker.com/registry/insecure/#/deploying-a-plain-http-registry
If you're using boot2docker, the file location and format is slightly different. Give this a shot if this is the case: http://www.developmentalmadness.com/2016/03/09/docker-configure-insecure-registry-in-boot2docker/
I've had issues with my docker machines not saving this setting on reboots. If you run into that issue, I'd recommend you make a new machine including the flag --engine-insecure-registry <registry-ip>:5000 in the docker-machine create command.
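For example (the amazonec2 driver and the machine name are just placeholders for whatever you normally pass to docker-machine create):
docker-machine create -d amazonec2 \
  --engine-insecure-registry ec2-xx-xx-xxx-xxx.us-west-2.compute.amazonaws.com:5000 \
  test-host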
Best of luck!
I want to build a "centralized" development environment using docker for my development team (4 PHP developers).
I have one big Linux server (lot of RAM, Disk, CPU) that runs the containers.
All developers have an account on this linux server (a home directory) where they put (git clone) the projects source code. Locally (on their desktop machine) they have access to their home directory via a network share.
I want all developers to be able to work on the same projects at the same time, but to view the results of their code edits in different containers (or sets of containers, for projects that use linked containers).
The docker PHP development environment by itself is not a problem. I have already tried something like that with success: http://geoffrey.io/a-php-development-environment-with-docker.html
I can use fig, with a fig.yml at the root of each project's source code, so each developer can do a fig up to launch the set of containers for a given project. I can even use a different FIG_PROJECT_NAME environment variable for each account, so I suppose that two developers can fig up the same project and there will be no container name collisions.
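A minimal fig.yml for one such project might look like this (the PHP image, port, and mount path are placeholders):
web:
  image: php:5.6-apache
  volumes:
    - .:/var/www/html
  ports:
    - "80"
With only the container port listed, Docker maps it to a random host port, which is exactly the part that needs the dynamic reverse proxy described below.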
Does it make sense?
But beyond that, I don't really know how to dynamically give access to the running containers: when running, there will typically be a web server in a container mapped to a random port on the host. How can I set up a sort of "dynamic DNS" to point to the running container(s), accessible, let's say, through an nginx reverse proxy (the vhost creation and destruction has to be dynamic too)?
To summarize, the workflow I would like to have :
A developer sshes into the dev env (the big Linux server).
From his home directory he goes into the project directory and does a fig up.
A vhost is created in the nginx reverse proxy, pointing to the running container, and a DNS entry (or /etc/hosts entry) is added matching the server_name of this previously generated vhost.
The source code is mounted into the container from a host directory (-v host/dir:container/dir), so the developer can edit any file while the container is running.
The result can be viewed by accessing the vhost, for example :
randomly-generated-id.dev.example.org
When the changes are OK, the developer can do a git commit/push.
Then the dev does a fig stop, which in turn deletes the corresponding vhost from the nginx reverse proxy and also deletes the dynamic DNS entry.
So, how would you do a setup like this? I mentioned tools like fig, but if you have any other suggestions... just remember that I would like to keep a lightweight workflow (after all, we are a small team :))
Thanks for your help.
Does it make sense?
Yes, that setup makes sense.
I would suggest taking a look at one of these projects:
https://github.com/crosbymichael/skydock
https://github.com/progrium/registrator
https://github.com/bnfinet/docker-dns
They're all designed to create DNS entries for containers as they start. Then just point your DNS server at it and you should get a nice domain name every time someone starts up an environment (I don't think you'll need a nginx proxy). But you might also be interested in this approach: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
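The approach in that last link boils down to running the jwilder/nginx-proxy container against the Docker socket and giving each app container a VIRTUAL_HOST variable, roughly like this (the app image and hostname are placeholders):
# one reverse proxy for the whole host, watching container start/stop events via the Docker socket
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
# each project container announces the vhost it should be reachable under
docker run -d -e VIRTUAL_HOST=randomly-generated-id.dev.example.org my-php-app
nginx-proxy then regenerates and reloads the vhost configuration automatically as containers come and go, which covers the dynamic vhost creation/destruction part of the question.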
Now, there's an even better option for you: Traefik. It will act as a reverse proxy, listening on 80/443, and will differentiate by hostname. Then, it will forward traffic dynamically, based on labels applied to the containers.
Here is a good solution to your issue:
1) Set up Traefik to listen to the Docker daemon, forwarding based on ports
2) Ensure the frontend app servers for your devs are on the same Docker network as Traefik
3) Set a wildcard DNS entry pointing to your server. For example: *.localdev.example.com.
4) On each container, set the hostname in that wildcard namespace. For example: jsmith-dev1.localdev.example.com. This would be specified in a Docker label such as: traefik.frontend.rule=Host:jsmith-dev1.localdev.example.com.
This would allow developers to dynamically forward traffic by domain to their own dev containers.
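A sketch of that setup with Traefik 1.x, which matches the label syntax above (the shared network name, image tag, and app image are placeholders):
# shared network so Traefik can reach the developers' containers
docker network create traefik-net
# Traefik itself, watching the Docker socket and owning port 80
docker run -d --name traefik --network traefik-net -p 80:80 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  traefik:1.7 --docker --docker.domain=localdev.example.com
# a developer's app container, announcing its hostname via labels
docker run -d --network traefik-net \
  --label traefik.frontend.rule=Host:jsmith-dev1.localdev.example.com \
  --label traefik.port=80 \
  my-php-app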
Yes, I'm aware this is a 3-year-old question. It still comes up first on Google in 2018 for "centralized docker development server", so I'm going to post this anyway to help those currently searching.