Link docker container to external public IP - docker

On performing curl ifconfig.me on my machine I am able to get my public IP, but the same does not work in my container.

The simple answer to my silly question is: we don't need to. Instead, port binding is used to bind the container port to the host.
The public IP of the host can then be used, together with the bound port, to interact with the hosted application.
Silly me!
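For example, a minimal sketch of that port binding (nginx is just a stand-in image, and <host-public-ip> is a placeholder):

    # publish container port 80 on host port 8080
    docker run -d -p 8080:80 nginx
    # the application is then reachable via the host's public IP
    curl http://<host-public-ip>:8080/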

Related

How would I bind a port to a secondary public IP while keeping the same primary IP?

Basically I'm trying to run two websites, one on each of my IPs, but I can't figure out how to bind a port to the secondary one. I'm running Ubuntu 20.04. The secondary IP is one I got from my VPS provider.
I haven't ever used two IPs before, so any help would be appreciated :)
I have tried using sudo ip addr add xxx.xxx.xxx.xxx/24 dev eth0, and it does add the address to the interface, but when I actually bind a port to it, it's not reachable from outside connections.
I don't know if it's because I haven't got any private IPs/subnets assigned. I'm a complete noob in this field xD
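For reference, using the docker run -p form described in the answer to the next question, the binding itself would look roughly like this (a sketch only; 203.0.113.10 is a placeholder for the secondary address, and the VPS provider must actually route that address to the host):

    # attach the secondary address to the interface (as already tried above)
    sudo ip addr add 203.0.113.10/24 dev eth0
    # publish the container port on that specific address only
    docker run -d -p 203.0.113.10:80:80 nginx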

Docker Compose with static public IP over LAN, different from the host IP

I have a requirement where I need to expose all my containers through a static public IP.
However, the static public IP cannot be the host IP, because the host IP must be dynamic.
The two solutions I found are macvlan and a Linux secondary IP, but based on my understanding, neither can fulfil my need:
with macvlan, each container gets an individual IP, whereas I need to access all containers through the same IP.
with a Linux secondary IP, I can assign a single static IP exclusively for my Docker containers; however, I haven't found a way to manage /etc/network/interfaces inside a Docker container.
My questions are:
Is it possible to set all containers to use the same IP using macvlan?
Is there any way to manage /etc/network/interfaces, including ifup and ifdown, inside a Docker container?
Is there any alternative method?
Edit:
The image is the system design for what I wish to achieve:
Assign the static IP to your host and use the ordinary docker run -p option. The host is allowed to have multiple IP addresses (it presumably already has its dynamic IP address and the Docker-internal 172.17.0.1 address), and you can pass a specific host address to -p, as in docker run -p 10.10.10.10:80:8888, to bind to that address and no other (host port 80 forwards to port 8888 in the container).
Another good setup is to provision a load balancer of some sort, assign the static IP address to it, and have it forward to the host. This is also helpful if you want to put some level of rate limiting or basic HTTP filtering at this layer.
There's no specific technical barrier to running ifconfig by hand inside a container, but no off-the-shelf image expects to need to do it, which means you'd have to write all of your own images, and they wouldn't really be reusable outside this specific environment. A developer might have trouble running the identical image locally, for instance.
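A sketch of the first suggestion, assuming 10.10.10.10 is the static address from the answer (the image names are hypothetical):

    # every published port is bound to the static address only, so all
    # containers are reached through the same IP while the host's dynamic
    # address never serves container traffic
    docker run -d -p 10.10.10.10:80:8888 web-image
    docker run -d -p 10.10.10.10:5432:5432 db-image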

Access Docker Container Name By DNS Subdomain

Basically, I have Docker Swarm running on a physical machine with a public IPv4 address and a registered domain, say example.com.
Now, I would like to access the running containers from the Internet using their names as subdomains.
For instance, let's say there is a mysql container running with the name mysqldb. I would like to be able to access it from the Internet via the following DNS name:
mysqldb.example.com
Is there any way to achieve this?
I've finally found a so-called solution.
In general it's not possible :)
However, some well-known ports can be forwarded to a container by using HTTP reverse proxies such as Nginx, HAProxy, etc.
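One common way to get that name-based routing for HTTP services (it won't help for raw MySQL traffic) is the nginx-proxy image. A sketch, where app.example.com and my-web-image are placeholders, and a wildcard DNS record for *.example.com must point at the host:

    # the proxy watches the Docker socket and routes requests by Host header
    docker run -d --name proxy -p 80:80 \
        -v /var/run/docker.sock:/tmp/docker.sock:ro nginxproxy/nginx-proxy
    # any container started with VIRTUAL_HOST set is then reachable at that subdomain
    docker run -d -e VIRTUAL_HOST=app.example.com my-web-image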

Providing a stable URL to access a Docker container from another Docker container

We are using docker-compose to bring up multiple containers and link them together.
We need to be able to persist the URL of a service running in containerA in our data store so that we can look it up at a later date and use it to access the service from containerB. containerB should not have to know whether the service is running as a local container or not; it should just be able to grab the URL and use it.
We can get the address of a linked container using environment variables in the standard way, e.g.
http://$CONTAINER_A_SERVICE_PORT_9000_TCP_ADDR:$CONTAINER_A_SERVICE_PORT_9000_TCP_PORT/someresource
but my understanding is that if we store this URL and try to access the service after restarting the containers, Docker may have assigned a new port and/or IP to the container and the address could be invalid.
At the moment all I can think of is exposing the port of the container on the host machine and using the public address of the host as the stable endpoint for the container, but I would really like a solution that avoids going out to the public network.
Any ideas would be greatly appreciated.
I would use the hostname of the linked service that gets put into /etc/hosts.
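With docker-compose specifically, the service name is a stable DNS name on the shared network, so the stored URL can use that name instead of an address. A sketch, assuming compose services named containera (listening on 9000) and containerb, with curl available in containerb's image:

    # the service name is resolved by Docker's embedded DNS and stays
    # valid across container restarts, unlike a captured IP:port pair
    docker compose exec containerb curl http://containera:9000/someresource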

Expose Travis CI Localhost to public

My Travis container opens up some HTML pages on localhost:8080. I'd like these to be accessible to the public while the container is running.
How do I enable this, and how do I find out the public IP address for each instance? Is this even possible?
Thanks.
I used ngrok.io for an easy solution.
Exposing containers to the outside world generally requires you to first establish a connection from inside the container to a gateway proxy.
There are multiple solutions available: ngrok (which does not provide authentication, though), and cloudflared (Cloudflare Argo Tunnel) and gw.run, which both provide a tunnel plus authentication/authorization.
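The ngrok route is a one-liner (a sketch, assuming the ngrok CLI is installed and the pages are served on port 8080 as above):

    # opens a tunnel and prints a public URL that forwards to localhost:8080
    ngrok http 8080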
