In our solution we want to connect our Edge module to the service bus of a host on a different network.
The DNS server is not allowed (by design) to have the DNS mapping for that host, hence I need to do the DNS mapping in the hosts file of the Windows container that the Edge module is running in.
I have done some tests with the docker run and docker build commands, setting the --add-host parameter, but this doesn't seem to be supported in Windows containers. Looking at the hosts file after the container has been started with that flag at least suggests that it is not.
Moreover, I'm not sure I could use this anyway, since the Edge runtime is in control of running the containers (please correct me if I'm wrong here).
In my desperation I tried to modify the hosts file through code, but was stopped because the required administrative privileges were not met.
Anyway, this feels like a hack and is not something one should have to do.
Is there an easier way to add a DNS host mapping?
Assuming that you are using a Windows base image, you could modify the hosts file there.
I did it with cmd, like this:
docker exec --user "NT AUTHORITY\SYSTEM" -it <container> cmd
and when I got that shell I ran:
echo x.y.z.w hostname >> c:\windows\system32\drivers\etc\hosts
At first I tried with PowerShell, but all I got was some weird blank spaces between my text.
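Those blank spaces are most likely because PowerShell's >> redirection writes UTF-16 by default, which other tools then read as text interleaved with null characters. If you'd rather stay in PowerShell, a sketch that forces ASCII encoding (the container name and address are placeholders) would be:
docker exec --user "NT AUTHORITY\SYSTEM" <container> powershell -Command "Add-Content -Path C:\Windows\System32\drivers\etc\hosts -Value 'x.y.z.w hostname' -Encoding Ascii"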
I have a server application (which I cannot change) that, when you connect as a client, will give you other URLs to interact with. Those URLs are part of the same server, so the advertised URLs use the hostname of a docker container.
We are running in a mixed economy (some docker containers, some regular applications). We need a setup where the server runs as a docker application on a single VM, and that server will be accessed by non-docker clients (as well as docker clients not running on the same docker network).
So you have a server hostname (the docker container) and a docker host hostname (the hostname of the VM running docker).
The client's initial connection is to dockerhostname:1234, but when the server sends URLs to the client, it sends serverhostname:5678 ... which is not resolvable by the client. So far we've addressed this by adding serverhostname to the client's /etc/hosts file, but this is a pain to maintain.
I have also set the --hostname of the server docker container to the same name as the docker host, and it has mostly worked, but I've seen cases where a docker container running on the same docker network as the server had issues connecting to the server.
I realize this is not an ideal docker setup. We're migrating from a history of delivering RPMs to delivering containers, but it's a slow process. Our company has lots of applications.
I'm really curious if anyone has advice/lessons learned with this situation. What is the best solution to my URL problem? (I'm guessing it is the /etc/hosts approach we're already using.)
You can do port mapping: -p 8080:80
How do you build and run your container?
With a shell command, a Dockerfile, or a yml file?
Check this:
docker port
Call the server with its IP and the port published on the docker host, and it will work:
<SERVER_IP>:<PORT_FROM_DOCKER_HOST>
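For example (the ports and image name are placeholders), publishing both the initial port and the advertised port on the docker host would look something like:
docker run -d -p 1234:1234 -p 5678:5678 myserver-image
Clients outside docker then connect to <docker-host>:1234 and <docker-host>:5678.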
To work with hostnames you need DNS or a hosts file.
The hosts file solution is not a good idea; that's how the internet worked back in the early days ^^
If something changes you have to update the hosts file on every client!
Or use a static IP for your container:
docker network ls
docker network create my-network
docker network create --subnet=172.18.0.0/16 mynet123
docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash
Assign static IP to Docker container
You're describing a situation that requires a ton of work. The shortest path to success is your "adding things to the /etc/hosts file" process. You can use configuration management, like Ansible/Chef/Puppet, so you only have to update one location and distribute it out.
But at that point, you should look into something called "service discovery." There are a ton of ways to skin this cat, but the short of it is this. You need some place (lazy mode is DNS) that stores a database of your different machines/services. When a machine needs to connect to another machine for a service, it asks that database. Hence the "service discovery" part.
Now implementing the database is the hardest part of this, there are a bunch of different ways, and you'll need to spend some time with your team to figure out what is the best way.
Normally running an internal DNS server like dnsmasq or BIND should get you most of the way, but if you need something like Consul, that's a whole other conversation. There are a lot of options, and the best thing to do is research and audit what you actually need for your situation.
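As a rough sketch of the "lazy mode is DNS" option, a dnsmasq entry on an internal DNS server could map the advertised server hostname to the docker host's address (the hostname and IP below are placeholders):
address=/serverhostname/192.168.1.50
Every client pointed at that DNS server then resolves serverhostname without per-machine hosts file edits.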
This seemed like a fairly straightforward thing to do: I want to use an FTP client to copy files to and from a local docker container on a Windows machine.
I am using a bitnami container (Magento 2, but please don't tag this post as magento as it's more of a docker question), and I prefer using a GUI FTP client like FileZilla as opposed to using the command line.
How can I set this up? Or maybe I am missing something in regard to docker.
Thank you!
The problem is that FTP uses two connections: one for control and one for the actual data, and the data connection is opened on a dynamically chosen port. Because of the way that Docker networks work, you cannot dynamically map those ports. The non-secure way to resolve this is to add a flag to the run command which removes the network isolation of the container.
docker run [other flags] --network host <image_name>
Technically, this changes the network driver that the container uses.
More info on this can be found in Docker's Networking using the host network tutorial.
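If you'd rather keep the container on its normal Docker network, another approach (a sketch only; the port range and image name are placeholders, and the exact passive-mode setting depends on the FTP server image) is to pin the FTP server's passive port range and publish it explicitly along with the control port:
docker run [other flags] -p 21:21 -p 21100-21110:21100-21110 <image_name>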
Edit: Option was spelled with a single colon instead of two.
I'm getting familiar with Rancher and Docker and I'm now trying to figure out if it is possible to create multiple local custom hosts on the same physical machine. I'm running RancherOS on a local computer. Through the Rancher web UI I'm able to create a local custom host and add containers to it.
When I try to add another local custom host by copying the given command into the terminal (SSH into the Rancher machine), it starts the process but nothing happens. The new host doesn't appear in the hosts list of the web interface and I don't receive any error from the terminal.
I couldn't get any useful information from the Rancher documentation about this possible issue.
I was wondering whether it's simply not possible to have more than one custom virtual host on the same physical machine, or whether the command fails for some reason that I would like to know how to debug.
sudo docker run -d --privileged \
-v /var/run/docker.sock:/var/run/docker.sock rancher/agent:v0.8.2 \
http://192.168.1.150:8080/v1/projects/1a5/scripts/<registrationToken>
where <registrationToken> is replaced by the one provided by Rancher.
There is nothing "virtual" about them. The agent talks to docker and manages one docker daemon, which is the entire machine. Running multiple agents does not make sense for a variety of reasons; for example, when you type "docker run ..." on the machine, which agent is supposed to pick up that container? And they are not really isolated from each other regardless, because any of them can run privileged containers, which can then do whatever they want in ways that affect the others.
The only way to do what you're asking is to have actual virtual machines running on the physical machine, each with their own OS and docker daemon.
Another option might be to use Linux containers to create separate environments, each having its own IP address and running its own docker daemon.
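As a hedged sketch of that last idea, the official docker:dind image runs a separate docker daemon inside a privileged container (the container name below is just an example):
docker run -d --privileged --name rancher-host-2 docker:dind
Each such container could then in principle be registered with Rancher as its own host, though whether the Rancher agent runs correctly inside it is something you'd have to verify.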
I created a customized Docker image based on Ubuntu 14.04 with the sensu-client package inside.
Everything went fine, but now I'm wondering how I can trigger checks that run against the host machine.
For example, I want to be able to check the processes that are running on the host machine and not only the ones running inside the container.
Thanks
It depends on what checks you want to run. A lot of system-level checks work fine if you run the sensu container with the --net=host and --privileged flags.
--net=host not only lets you see the same hostname and IP as the host system, but also means all the TCP connections and interface metrics will match between container and host.
--privileged gives the container full access to system metrics like HDD, memory, and CPU.
The tricky part is checking external process metrics, as docker isolates them even from a privileged container, but you can share the host's root filesystem as a docker volume (-v /:/host) and patch the check to use chroot or to read /host/proc instead of /proc.
Long story short, some checks will just work; for others you need to patch them or develop your own approach, but Sensu in docker is one possible way.
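Putting those flags together, a sketch of the run command (the image name is a placeholder) could look like this:
docker run -d --net=host --privileged -v /:/host:ro <sensu-client-image>
Checks patched to read /host/proc (or to chroot into /host) can then see the host's processes.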
An unprivileged docker container cannot check processes outside of its own container, because docker uses kernel namespaces to isolate it from all other processes running on the host. This is by design: see the docker security documentation.
If you would like to run a super privileged docker container that has this namespace disabled you can run:
docker run -it --rm --privileged --pid=host alpine /bin/sh
Doing so removes an important security layer that docker provides and should be avoided if possible. Once in the container, try running ps auxf and you will see all processes on the host.
I don't think this is possible right now.
If the processes on the host are running inside docker, you can mount the docker socket and get the status from the sensu container.
Add a sensu-client to the host machine? You might want to split it out so you have granularity between problems in the containers vs. problems with your hosts.
Otherwise, you would have to set up some way to report from the inside, either using something low level (system calls etc.) or setting up something from outside to catch the calls and report back status.
HTHs
Most if not all sensu plugins hardcode the path to the proc files. One option is to mount the host proc files to a different path inside of the docker container and modify the sensu plugins to support this other location.
Here is my base docker container that supports modifying the sensu plugins' proc file location:
https://github.com/sstarcher/docker-sensu
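A minimal sketch of the mount described above (the alternate path and image name are placeholders; the plugins would then need to be pointed at that path):
docker run -d -v /proc:/host_proc:ro <sensu-image>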
We're starting to go down the containerization route with Docker and have created Docker versions of some of our infrastructure and applications.
Apigee is proving a little more of a struggle...we're doing a standalone install inside our Dockerfile and that works great. Once the install has finished and the container is started you can hit the UI and the management API just fine from the machine running the container.
The problem appears to be the virtualhost. Inside the container it is fine - if you enter the container (nsenter has been massively useful) you can then run the /test/test1-sa.sh script with no problems. From outside the container, that virtualhost port is not accessible, even when you use the EXPOSE command inside your Dockerfile.
The only thing I maybe have to go on is the value for all the hostname entries inside our silent installation file. It is pointing to 127.0.0.1, which the Apigee docs seem to warn against.
Many thanks
Michael
Make sure you set your hostname to your external IP address in /etc/hosts (as Docker runs on Ubuntu -- I believe it's in /etc/sysconfig/network if you're running CentOS). It should look something like this at a minimum:
127.0.0.1 localhost
172.56.12.67 MyApigeeInstance
Then running hostname -i should give you the outside IP address, and the individual components will know how to find each other. Otherwise all components get registered as 127.0.0.1 and the machines can't find each other.
You might also want to take a look at what ports are open for your docker image. The install doc for Apigee lists a TON of ports you need open for the various components.
I don't know if you have to do this as part of the docker image or if there is a way to configure its underlying Ubuntu settings.
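Regarding the ports specifically, remember that EXPOSE in the Dockerfile only documents them; they still have to be published when the container is started. A sketch (the port numbers and image name are placeholders; check the Apigee install doc for the real list) would be:
docker run -d -p 9000:9000 -p 8080:8080 <apigee-image>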