How to use an FTP client with Docker containers

This seemed like a fairly straightforward thing to do: I want to use an FTP client to copy files to and from a local Docker container on a Windows machine.
I am using a Bitnami container (Magento 2, but please don't tag this post as Magento, as it's more of a Docker question), and I prefer using a GUI FTP client like FileZilla as opposed to using the command line.
How can I set this up? Or maybe I am missing something in regard to docker.
Thank you!

The problem is that FTP uses two connections: a control connection on a fixed port and a data connection on a dynamically negotiated port. Because of the way Docker networking works, you cannot map those dynamic ports ahead of time. The non-secure way to resolve this is to add a flag to the run command that removes the container's network isolation:
docker run [other flags] --network host <image_name>
Technically, this changes the network driver that the container uses.
More info on this can be found in Docker's Networking using the host network tutorial.
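As a concrete example for the Bitnami Magento image mentioned in the question, the run command might look like the following. This is a sketch only: the image tag is illustrative, and it assumes host networking is supported by your Docker setup and that the container actually runs an FTP/SFTP service.
docker run -d --name magento --network host bitnami/magento:latest
With host networking, whatever FTP/SFTP server the container exposes listens directly on the host's ports, so a GUI client like FileZilla can connect to localhost.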

Related

Can (Should) I Run a Docker Container with the Same Hostname as the Docker Host?

I have a server application (that I cannot change) that, when you connect as a client, will give you other URLs to interact with. Those URLs are also part of the same server so the URL advertised uses the hostname of a docker container.
We are running in a mixed economy (some Docker containers, some regular applications). We actually need a setup where the server runs as a Docker application on a single VM, and that server will be accessed by non-Docker clients (as well as Docker clients not running on the same Docker network).
So you have a server hostname (the docker container) and a docker hostname (the hostname of the VM running docker).
The client's initial connection is to: dockerhostname:1234, but when the server sends URLs to the client, it sends: serverhostname:5678 ... which is not resolvable by the client. So far, we've addressed this by adding "serverhostname" to the client's /etc/hosts file, but this is a pain to maintain.
I have also set the --hostname of the server Docker container to the same name as the Docker host, and it has mostly worked, but I've seen cases where a Docker container running on the same Docker network as the server had issues connecting to the server.
I realize this is not an ideal Docker setup. We're migrating from a history of delivering RPMs to delivering containers ... but it's a slow process. Our company has lots of applications.
I'm really curious if anyone has advice/lessons learned with this situation. What is the best solution to my URL problem? (I'm guessing it is the /etc/hosts we're already doing)
You can do port mapping with -p 8080:80.
How do you build and run your container? With a shell command, a Dockerfile, or a Compose (YAML) file?
Check the port mappings with:
docker port <container>
Then connect to:
[SERVERIP]:[PORT FROM DOCKERHOST]
and it will work.
To work with hostnames you need DNS, or you can use the hosts file.
The hosts-file solution is not a good idea; that's how the internet worked in its early days ^^
If something changes, you have to update the hosts file on every client!
Or use a static IP for your container:
docker network ls
docker network create my-network
docker network create --subnet=172.18.0.0/16 mynet123
docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash
Assign static IP to Docker container
You're describing a situation that requires a ton of work. The shortest path to success is your existing "add entries to the clients' /etc/hosts file" process. You can use configuration management, like Ansible/Chef/Puppet, so you only have to update one location and distribute it out.
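As a very rough sketch of that idea, without any particular configuration-management tool (the hostnames, IP address, and client list below are all hypothetical):
# push a single hosts entry to every client instead of editing each file by hand
for client in client1 client2 client3; do
  ssh "$client" "grep -q serverhostname /etc/hosts || echo '10.0.0.5 serverhostname' | sudo tee -a /etc/hosts"
done
A real tool adds idempotence, inventory, and reporting on top of essentially this loop.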
But at that point, you should look into something called "service discovery." There are a ton of ways to skin this cat, but the short of it is this. You need some place (lazy mode is DNS) that stores a database of your different machines/services. When a machine needs to connect to another machine for a service, it asks that database. Hence the "service discovery" part.
Now, implementing that database is the hardest part; there are a bunch of different approaches, and you'll need to spend some time with your team to figure out which one fits best.
Normally, running an internal DNS server like dnsmasq or BIND should get you most of the way, but if you need something like Consul, that's a whole other conversation. There are a lot of options, and the best thing to do is research and audit what you actually need for your situation.
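For the dnsmasq direction, a minimal sketch on a Debian-style host that all clients already use as their resolver (the hostname and IP address are hypothetical):
# map the advertised server hostname to the VM that runs the container
echo 'address=/serverhostname/10.0.0.5' | sudo tee /etc/dnsmasq.d/docker-services.conf
sudo systemctl restart dnsmasq
Clients then resolve serverhostname through DNS instead of through per-machine /etc/hosts entries.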

Use a Docker image in another Docker image

I have two Docker images:
CLI tool
Webserver
The CLI tool's image is very heavy and takes hours to build. I am trying to call the CLI tool from the webserver, but I'm not sure how to go on from here. Is there a way to make the command built in the first image available in the second?
At this point I tried working with volumes, but no luck. Thanks!
The design of Docker sort-of assumes that containers communicate through a network, not through the command line. So the cleanest solution is to create a simple microservice that wraps the CLI tool and can be called through HTTP.
As a quick and dirty hack, you could also use sshd as such a microservice without writing any code.
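A rough sketch of the sshd idea, assuming the CLI image has (or can be given) an SSH server and a key pair set up in advance; all image, container, and command names below are hypothetical:
docker network create tools-net
# run the heavy CLI image with sshd as its long-running process
docker run -d --name cli-tool --network tools-net my-cli-image /usr/sbin/sshd -D
# attach the webserver to the same network
docker run -d --name webserver --network tools-net my-webserver-image
# from inside the webserver container, invoke the tool over SSH
ssh user@cli-tool my-cli-command --some-arg value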
An alternative that doesn't involve the network is to make the socket of the Docker daemon available in the webserver container using a bind mount:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Then you should be able to communicate with the host daemon from within the container, provided that you have installed the docker command line tool in the image. However, note that this makes your application strongly dependent on Docker, which might not be ideal. Also note that it essentially gives the container root access to the host system!
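With the socket mounted and the docker CLI installed in the webserver image, the webserver can launch the CLI tool as a sibling container on the host. A sketch, with hypothetical image and command names (add a shared volume if files have to be passed between the two):
# run from inside the webserver container; this talks to the host's Docker daemon
docker run --rm my-cli-image my-cli-command --input /shared/data/file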
(Note that this is different from Docker-in-Docker, which is running a second Docker daemon inside a container and is generally not recommended except for specialized use cases.)

Docker Swarm - Unable to connect to other containers because IP lookup fails

Say we provision an overlay network using Docker Swarm and create various containers with the following names:
Alice
Bob
Larry
John
Now if we try to ping any container from another, it fails because it does not know how to do the IP lookup, i.e., alice does not know bob's IP and so on. We have been taking care of this by manually editing /etc/hosts on every container and entering the name/IP key-value pairs in that file, but this is becoming very tedious with every restart of our network. There ought to be a better way of handling this.
E.g., services created using docker stack do not suffer from this problem. For various reasons we are stuck with creating containers using the vanilla docker create. How can we make containers discover each other on the overlay network without the manual labor of editing /etc/hosts?
Below is the detailed workflow we currently have to follow:
We first provision a Docker swarm and an overlay network.
Then, for each container, we create it using the docker create command and start it using the docker start command. We use the --network flag to attach the container to the overlay network at creation time.
We then use docker container inspect to get the IP address of each container. This involves running n commands and noting down the IP addresses.
Then we log into each container and edit the /etc/hosts file by hand, entering the (name, IP) key-value pairs of the other containers. This means entering n*(n-1) records by hand when summed across containers.
Not sure why docker create does not do all this automatically - Docker already knows (or can know) all the IP addresses. Containers provisioned using docker stack, for example, do not have to go through this manual process to "discover" each other. The reasons we cannot use docker stack are:
it does not allow us to specify the container name
we run various commands (mostly docker cp) before starting the container, and this is not possible using stack
You might have seen this already: DNS on user-defined networks
Have you created your services as in the section "Attach a service to an overlay" in this doc?
It seems that the only thing needed is to refer to the containers by {name}.{network} instead of just {name}. There is no need to edit /etc/hosts, use the --add-host flag, or run an additional DNS server. See https://forums.docker.com/t/need-help-connecting-containers-in-swarm-mode/77944/6
Further details: the official Docker documentation does not mention anywhere the need to add the .{network} suffix to the {containername}. Indeed, at this link, Step #7 under the walk-through uses no .{network} suffix, so it is not clear why we need it. The Docker version we are using is 18.06.1-ce for Linux.
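A minimal sketch of that suggestion with plain docker create / docker start (container names follow the question; the image is illustrative, and the overlay network must be created with --attachable so standalone containers can join it):
docker network create --driver overlay --attachable my-overlay
docker create --name alice --network my-overlay alpine sleep 1d
docker create --name bob --network my-overlay alpine sleep 1d
docker start alice bob
# from one container, resolve the other as {name}.{network}
docker exec alice ping -c 1 bob.my-overlay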
I had a similar issue: I was following this official tutorial to create a Docker swarm overlay network on two Raspberry Pi 3s, and pinging was impossible until I found the answer on GitHub. As I understand it, the latest version of Alpine is (for a reason I don't know) not suitable for the Raspberry Pi 3, so the solution is to use version 3.12.3, like this: sudo docker run -dit --name alpine1 --network test1 alpine:3.12.3
Hope this helps someone :)

Easy, straightforward, robust way to make host port available to Docker container?

It is really easy to mount directories into a docker container. How can I just as easily "mount a port into" a docker container?
Example:
I have a MySQL server running on my local machine. To connect to it from a docker container I can mount the mysql.sock socket file into the container. But let's say for some reason (like intending to run a MySQL slave instance) I cannot use mysql.sock to connect and need to use TCP.
How can I accomplish this most easily?
Things to consider:
I may be running Docker natively if I'm using Linux, but I may also be running it in a VM if I'm on Mac or Windows, through Docker Machine or Docker for Mac/Windows (Beta). The answer should handle both scenarios seamlessly, without me as the user having to decide which solution is right depending on my specific Docker setup.
Simply assigning the container to the host network is often not an option, so that's unfortunately not a proper solution.
Potential solution directions:
1) I understand that setting up a proper local DNS and making the Docker container (network) talk to it might be a proper, robust solution. If there is such a DNS service that can be set up with one, at most two, commands and then "just work", that might be something.
2) Essentially, what's needed here is something that listens on a port inside the container and, like a sort of proxy, routes traffic between the TCP/IP participants. There's been discussion on this closed Docker GH issue that shows some ip route command-line magic, but that's a bit too much of a requirement for many people, myself included. But if there were something akin to this that was fully automated, understood Docker, and, again, could be set up with 1-2 commands, that'd be an acceptable solution.
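As a rough, Linux-only sketch of the proxy idea in direction (2), using socat on the host rather than inside the container, and assuming the default Docker bridge gateway address 172.17.0.1 and a MySQL server that listens only on 127.0.0.1 (all addresses and ports are illustrative):
# run on the host: expose the local MySQL port on the Docker bridge gateway address
socat TCP-LISTEN:3306,bind=172.17.0.1,fork,reuseaddr TCP:127.0.0.1:3306
# containers on the default bridge can then reach the host's MySQL at 172.17.0.1:3306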
I think you can run your container with the --net=host option. In this case the container binds to the host's network and is able to access all the ports on your local machine.
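A minimal sketch of that suggestion (Linux only; the MySQL image tag and credentials are illustrative):
# share the host's network namespace, then reach the host's MySQL on 127.0.0.1
docker run --rm -it --net=host mysql:8.0 mysql --host=127.0.0.1 --port=3306 --user=root --password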

Can a Docker process access programs on the host with IPC?

I'm working on a clustered Tomcat system that uses MQSeries.
Today MQSeries is accessed in bindings mode, i.e. via IPC, and Tomcat and MQSeries run on the same host without any virtualization/Docker support.
I'd like to transform that into a solution where MQSeries runs on the host (or possibly in a Docker container) and the Tomcat instances run in Docker containers.
It's possible to access MQSeries in client mode (via a TCP connection), and this seems to be the right solution.
Would it still be possible to access MQSeries from the Docker container via IPC, i.e., create exceptions to the IPC namespace separation? Is anything like that planned for Docker?
Since Docker 1.5 this has been possible with the --ipc=host flag, as in
docker run --ipc=host ubuntu bash
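Applied to the setup in the question, a rough sketch might look like the following. The image name is hypothetical, and whether bindings mode actually works this way also depends on the MQ installation and its libraries being visible inside the container, which is outside the scope of this sketch:
# share the host's IPC namespace so the containerized Tomcat can reach the host's queue manager in bindings mode
docker run -d --name tomcat-node1 --ipc=host my-tomcat-mq-image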
This answer suggests how IPC can be enabled with a source-code modification to Docker. As far as I (and the other answers there) know, there is no built-in feature.
Specifically, he says he commented out this line, which makes Docker create a separate IPC namespace.
Rebuilding Docker is a bit tedious because it brings in dozens of other things during the build, but if you follow the instructions it's straightforward.
