We're starting to go down the containerization route with Docker and have created Docker versions of some of our infrastructure and applications.
Apigee is proving a little more of a struggle. We're doing a standalone install inside our Dockerfile, and that works great: once the install has finished and the container is started, you can hit the UI and the management API just fine from the machine running the container.
The problem appears to be the virtualhost. Inside the container it is fine - if you enter the container (nsenter has been massively useful) you can then run the /test/test1-sa.sh script with no problems. From outside the container, though, the virtualhost port is not accessible, even when you use the EXPOSE instruction in your Dockerfile.
The only lead I have is the value of all the hostname entries inside our silent installation file: they point to 127.0.0.1, which the Apigee docs seem to warn against.
Many thanks
Michael
Make sure you set your hostname to your external IP address in /etc/hosts (as Docker runs on Ubuntu here -- I believe it's in /etc/sysconfig/network if you're running CentOS). It should look something like this at a minimum:
127.0.0.1 localhost
172.56.12.67 MyApigeeInstance
Then running hostname -i should give you the outside IP address, and the individual components will know how to find each other. Otherwise all components get registered as 127.0.0.1 and the machines can't find each other.
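A quick check (using the example entries above; your hostname and IP will of course differ):

hostname -i
# 172.56.12.67  <- good: the external address, so components register correctly
# 127.0.0.1     <- bad: everything registers as loopback and can't be reached from outside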
You might also want to take a look at what ports are open for your docker image. The install doc for Apigee lists a TON of ports you need open for the various components.
I don't know if you have to do this as part of the docker image or if there is a way to configure the underlying Ubuntu settings.
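As a rough sketch (the port numbers below are placeholders - use whatever your silent install actually configured for the virtualhost, UI and management API), publishing the ports at run time would look something like:

# EXPOSE in the Dockerfile only documents ports; you still need -p (or -P)
# on docker run for them to be reachable from outside the container
docker run -d \
  -p 9001:9001 \
  -p 9000:9000 \
  -p 8080:8080 \
  my-apigee-image   # placeholder image name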
Related
I have a server application (that I cannot change) that, when you connect as a client, will give you other URLs to interact with. Those URLs are also part of the same server so the URL advertised uses the hostname of a docker container.
We are running in a mixed economy (some docker containers, some regular applications). We actually need to set up where we have the server running as a docker application on a single VM, and that server will be accessed by non-docker clients (as well as docker clients not running on the same docker network).
So you have a server hostname (the docker container) and a docker hostname (the hostname of the VM running docker).
The client's initial connection is to dockerhostname:1234, but when the server sends URLs to the client, it sends serverhostname:5678 ... which is not resolvable by the client. So far we've addressed this by adding the server hostname to the client's /etc/hosts file, but this is a pain to maintain.
I have also set the --hostname of the server docker container to the same name as the docker host, and it has mostly worked, but I've seen cases where a docker container running on the same docker network as the server had issues connecting to the server.
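For reference, the invocation I mean looks roughly like this (the image name is a placeholder for our real one):

# give the container the VM's hostname so the advertised URLs resolve for external clients
docker run -d \
  --hostname "$(hostname -f)" \
  -p 1234:1234 \
  -p 5678:5678 \
  our-server-image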
I realize this is not an ideal docker setup. We're migrating from a history of delivering as rpm's to delivering containers .. but it's a slow process. Our company has lots of applications.
I'm really curious if anyone has advice/lessons learned with this situation. What is the best solution to my URL problem? (I'm guessing it is the /etc/hosts we're already doing)
You can do port mapping with -p 8080:80
How do you build and run your container?
With a shell command, a Dockerfile, or a yml file?
Check this:
docker port <container-name>
Call this and it will work:
[SERVERIP]:[PORT FROM DOCKERHOST]
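For example (the container name and ports here are only illustrative):

docker port mywebapp
# prints something like: 80/tcp -> 0.0.0.0:8080
# so from a client machine you would call:
curl http://SERVERIP:8080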
To work with hostnames you need DNS, or you can use the hosts file.
The hosts file solution is not a good idea - that's how the internet started back in the day ^^
If something changes, you have to update the hosts file on every single client!
Or use a static ip for your container:
docker network ls
docker network create my-network
docker network create --subnet=172.18.0.0/16 mynet123
docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash
Assign static IP to Docker container
You're describing a situation that requires a ton of work. The shortest path to success is your "adding things to /etc/hosts file" process. You can use configuration management, like ansible/chef/puppet to only have to update one location and distribute it out.
But at that point, you should look into something called "service discovery." There are a ton of ways to skin this cat, but the short of it is this. You need some place (lazy mode is DNS) that stores a database of your different machines/services. When a machine needs to connect to another machine for a service, it asks that database. Hence the "service discovery" part.
Now, implementing that database is the hardest part; there are a bunch of different ways, and you'll need to spend some time with your team to figure out which one is best for you.
Normally running an internal DNS server like dnsmasq or bind should get you most of the way, but if you need something like consul that's a whole other conversation. There are a lot of options, and the best thing to do is research, and audit what you actually need for your situation.
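As a minimal sketch of the "lazy mode is DNS" option (the hostname and IP below are made up, and this assumes dnsmasq is already installed on a box your clients use as their resolver):

# map the advertised server hostname to the docker host's IP once, centrally
echo 'address=/serverhostname/172.56.12.67' >> /etc/dnsmasq.conf
systemctl restart dnsmasq
# clients then point at this DNS server instead of each maintaining /etc/hosts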
Summary
I'm on mac. I have several docker containers, I can run all of them using docker-compose up and everything works as expected: for instance, I can access my backend container by searching http://localhost:8882/ on my browser, since port 8882 is mapped to the same port on host by using:
ports:
- "8882:8882"
Problems start when I try to attach an IDE to the backend container so as to be able to develop "from inside" that container.
I've tried using vscode's plugin "Remote - Containers" following this tutorial and also pycharm professional, which comes with the ability to run docker configurations out of the box. In both cases I had the same result: I run the IDE configuration to attach to the container and its local website suddenly stops working, showing "this site can't be reached".
When using pycharm, I noticed that Docker Desktop shows that the backend container changed its port to 54762. But I also tried that port with no luck.
I also used this command:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
to get the container ip (172.18.0.4) and tried that with both ports, again, same result.
Any ideas?
Pycharm configuration
Interpreter:
This works in the sense that I can watch the libraries code installed inside the container:
Run/Debug configuration. This configuration succeeds in the sense that I can start it and it seems to be attached correctly to the backend container... though the problem previously described appears.
So, there were many things at play here, since this is an already huge project full of technical debt.
But the main one is that the docker-compose I was using was running the server with uwsgi in production mode, which interfered with many things... among them pycharm's ability to successfully attach to the running container, debug, etc.
I was eventually able to create a new docker-compose.dev.yml file that overrides the main docker-compose file, changing only the backend server command to run flask in development mode. That fixed everything.
Be mindful that the flask run command inside a docker container does not serve the website to the outside until you pass the --host=0.0.0.0 option to it (by default flask binds only to 127.0.0.1 inside the container). More in https://stackoverflow.com/a/30329547/5750078
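For anyone in the same boat, a minimal sketch of such an override file (the service name and port come from my project; yours will differ):

# docker-compose.dev.yml - only replaces the backend command for development
services:
  backend:
    command: flask run --host=0.0.0.0 --port=8882

Run it with both files so the dev values win:

docker-compose -f docker-compose.yml -f docker-compose.dev.yml up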
Before I burn hours trying it out I wanted to ask the community is this even possible?
Scenario:
Running Goland on host (may be any OS)
Running Go dev env in Alpine based container
Code on host volume mapped to container
Can I attach the Goland debugger (Delve) to a Go process in the container? I'm assuming I can run delve in the container headless and run the client on the host, punching whatever port is required? Will I have binary compatibility issues if the host is not linux?
I'd rather not duplicate the entire post in this answer, but have a look at this resource on how to use containers to run applications you write https://blog.jetbrains.com/go/2018/04/30/debugging-containerized-go-applications/
To answer this specifically, as long as you have Go, the application sources, and all dependencies installed on the host machine, you can develop in GoLand and then, using a mapped volume, you can also run it from the container.
However, this workflow sounds more like the workflow you'd normally have using VMs not containers, which is why in the above article all the running/debugging is done using the actual containers, rather than using bash inside a container to run those commands.
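To sketch what the headless setup from the question looks like in practice (the port and package path below are placeholders):

# inside the Alpine container: compile the package and serve the debug session over TCP
dlv debug ./cmd/myapp --headless --listen=:2345 --api-version=2 --accept-multiclient
# publish that port when starting the container (e.g. -p 2345:2345), then point a
# "Go Remote" run configuration in GoLand at localhost:2345

The host never executes the Linux binary itself - it only talks to the Delve API over the mapped port - so a non-Linux host does not cause binary-compatibility problems when debugging this way.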
I am using VS2017 docker support. VS created DockerFile for me and when I build docker-compose file, it creates the container and runs the app on 172.x.x.x IP address. But I want to run my application on localhost.
I did many things but nothing worked. I followed the docker docs as a starter and the building Microsoft sample app guide. The second link works perfectly, but I get HTTP Error 404 when I try the first link's approach.
Any help is appreciated.
Most likely a different application already runs at port 80. You'll have to forward your web site to a different port, eg:
docker run -d -p 5000:80 --name myapp myasp
And point your browser to http://localhost:5000.
When you start a container you specify which inner ports will be exposed as ports on the host through the -p option. -p 80:80 exposes the inner port 80 used by web sites to the host's port 80.
Docker won't complain though if another application already listens at port 80, like IIS, another web application or any tool with a web interface that runs on 80 by default.
The solution is to:
Make sure nothing else runs on port 80 or
Forward to a different port.
Forwarding to a different port is a lot easier.
To ensure that you can connect to a port, use the telnet command, e.g.:
telnet localhost 5000
If you get a blank window immediately, it means a server is up and running on this port. If you get a message and a timeout after a while, it means nothing is listening. You can use this both to check for free ports and to ensure you can connect to your container's web app.
PS I ran into this just a week ago, as I was trying to set up a SQL Server container for tests. I already run 1 default and 2 named instances, and docker didn't complain at all when I tried to create the container. Took me a while to realize what was wrong.
In order to access the example posted on Docker Docs, which you pointed out as not working, follow the steps below.
1 - List all your docker containers
docker ps -a
After you run this command you should be able to view all your docker containers, and you should see a container with the name webserver listed there if you have followed the docker docs example correctly.
2 - Get the IP address where your webserver container is running. To do that run the following command.
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" webserver
You should now get the IP address at which the webserver container is running; hopefully you are familiar with this step, as it also appears in the building Microsoft sample app example that you attached to the question.
Access the IP address you get from the above command and you should see the desired output.
Answering your first question (accessing a docker container with localhost in Docker for Windows): on a Windows host you cannot access the container with localhost due to a limitation in the default NAT network stack. A more detailed explanation of this issue can be obtained by visiting this link. It seems the docker documentation is not yet updated, but this issue only exists on Windows hosts.
There is an issue reported for this as well - Follow this link to see that.
Hope this helps you out.
EDIT
The solution for this issue seems to be coming in a future Windows release. Until that release comes out, this limitation remains on Windows hosts. Follow this link -> https://github.com/MicrosoftDocs/Virtualization-Documentation/issues/181
For those encountering this issue in 2022, changing localhost to 127.0.0.1 solved it for me.
There is another problem too:
You must get the parameter order right.
This is WRONG
docker run container:latest -p 5001:80
This starts the container, but -p is not treated as a docker option (anything after the image name is passed to the container's command), so the container has no port mappings.
This is good
docker run -p 5001:80 container:latest
I'm just getting started with Docker, and I read a ton of documentation and tutorials yesterday, but I can't find where I read about replacing an external service using a linked container, and I'm not even sure which terminology to search for.
Say there is an apache container and a mysql container, where apache was run with a link to mysql, and has access to its ports and such. Now instead of MySQL running on the container instance, we move it to AWS RDS, for example. How do you modify the mysql container so that apache continues to run as expected? To clarify, apache would still be run with a link to a container with the alias mysql, but the mysql container would take care of getting traffic on that port sent to AWS.
Alternatively, maybe there is a container running a MySQL service, but that container is on another host. I have a vague feeling that the pattern I'm referring to would be able to handle that scenario as well. Does this sound familiar to anyone?
If the container is on another host, why not just hit that host directly and have docker be transparent, with 3306 (or whatever port you're running mysql on) forwarding requests to the container? I can't think of any reason you'd want to link containers unless they're actually on the same host. Docker is great at being transparent, so clients can run things against a service in Docker from another machine as if the service was being run directly on the machine without Docker.
If you really have to have both containers on the same machine (even though the mysql container is calling out to RDS or another host), you should be able to make a new, simple mysql image that just has the mysql client installed and simply takes requests and forwards them to RDS.
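One lightweight way to build that forwarding container (the image name and RDS endpoint below are hypothetical) is to skip MySQL entirely and just relay the TCP traffic, for example with socat:

# a stand-in container named "mysql" that listens on 3306 and relays to RDS
docker run -d --name mysql alpine/socat \
  TCP-LISTEN:3306,fork,reuseaddr TCP:mydb.abc123.us-east-1.rds.amazonaws.com:3306

Apache can keep linking to the mysql alias exactly as before; the container simply proxies every connection out to the external database.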