I have made two Docker containers using:
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Password1234" -p1433:1433 --name sql2019 -d mcr.microsoft.com/mssql/server:vNext-CTP2.0-ubuntu
and I distinguished them by changing the -p and --name values. But when I go over to Azure Data Studio to connect, I can only connect to one of them, because I enter 'localhost' in the Server field and both containers sit behind 'localhost'. How can I differentiate the two in Azure Data Studio? Is there any way to use the --name flag?
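For reference, the second container was started with essentially the same command, just a different host port and name, along these lines (the exact port 1434 and name sql2019b here are illustrative):
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Password1234" -p 1434:1433 --name sql2019b -d mcr.microsoft.com/mssql/server:vNext-CTP2.0-ubuntu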
I would appreciate a clear answer; I am new to server stuff.
My setup is based on running two Docker containers, one with an API and the other with a DB.
With this setup, both containers currently expose a port to web services.
But what I want is for the DB container (toolname-db) to be reachable only from the API container (toolname-api), so that the DB is not exposed to web services directly.
How do I have to alter my setup to make this possible?
Currently I use the following commands:
sudo docker build -t toolname .
sudo docker run -d -p 3333:3333 --name=toolname-db mdillon/postgis
sudo docker run -it -p 4444:4444 --name=toolname-api --network=host -d toolname
A container will only be reachable from outside Docker space if it has published ports. So you need to remove the -p option from your database container.
For the two containers to be able to talk to each other they need to be on the same network. Docker's default here is for compatibility with what's now a very old networking setup, so you need to manually create a network, though it doesn't need any special setting.
Finally, you don't need --net host. That disables all of Docker's networking setup; port mappings with -p are disabled, and you can't communicate with containers that don't themselves have ports published. (I usually see it recommended as a hack to work around hard-coded localhost connection strings.)
That leaves your final setup as:
sudo docker build -t toolname .
sudo docker network create tool
sudo docker run -d --net=tool --name=toolname-db mdillon/postgis
sudo docker run -d --net=tool -p 4444:4444 --name=toolname-api toolname
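On the shared tool network, the API container can then reach the database by its container name. A hypothetical connection string (user, password, and database name are placeholders) would look like:
postgres://user:password@toolname-db:5432/toolname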
As #BentCoder suggests in a comment, it's very common to use Docker Compose to run multiple containers together. If you do, it creates a network for you which can save you a step.
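For example, a minimal docker-compose.yml sketch along these lines (untested, service names taken from above) would produce the same layout, with Compose creating the network for you:
services:
  toolname-db:
    image: mdillon/postgis   # no ports: entry, so it is not published to the host
  toolname-api:
    build: .
    ports:
      - "4444:4444"
    depends_on:
      - toolname-db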
I have pulled the hyperledger/composer-rest-server Docker image. Now, if I want to run this image, which ports should I expose? Like mentioned below.
docker run --name composer-rest-server --publish XXXX:YYYY --detach hyperledger/composer-rest-server
Please tell me what I should replace XXXX and YYYY with.
I run the rest server in a container using a command as follows:
docker run -d \
-e COMPOSER_CARD="admin@test-net" \
-e COMPOSER_NAMESPACES="never" \
-v ~/.composer:/home/composer/.composer \
--name rest -p 3000:3000 \
hyperledger/composer-rest-server
For the published port, the first element is the port that will be used on the Docker host, and the second is the port it is forwarded to inside the container. (The port inside the container will always be 3000 by default and is more complex to change.)
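For example, to publish the REST server on host port 8080 instead, only the port mapping changes; the environment variables and volume from the command above stay the same:
docker run -d \
-e COMPOSER_CARD="admin@test-net" \
-e COMPOSER_NAMESPACES="never" \
-v ~/.composer:/home/composer/.composer \
--name rest -p 8080:3000 \
hyperledger/composer-rest-server
The API is then reachable from the Docker host at http://localhost:8080.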
I'm passing 2 environment variables into the Container which the REST server will recognise - Namespaces just keeps the endpoints simple, but the COMPOSER_CARD is essential for the REST server to start properly.
I'm also sharing a volume between the Docker Host and the Container which is where the Cards are stored, so that the REST server can find the COMPOSER_CARD referred to in the environment variable.
Warning: If you are trying to test the REST server with the Development Fabric, you need to understand the IP network and addressing of the Docker containers. By default the Composer Business Network Cards are built using localhost as the address of the Fabric servers, but you can't use localhost from the REST server container, because localhost there resolves to the container itself and the Fabric will not be found.
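As a quick check, you can list the networks on the Docker host and inspect the one the Fabric containers are attached to (the network name is a placeholder and depends on your Fabric setup):
docker network ls
docker network inspect <fabric-network-name>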
There is a tutorial in the Composer Docs that is focused on Multi-User authentication, but it does also cover the networking aspects of using the REST Server Container. There is general information about the REST server here.
I have some micro services that accept arguments to run.
At some point I might need them like below:
docker run -d -e 'MODE=a' --name x_service_a x_service
docker run -d -e 'MODE=b' --name x_service_b x_service
docker run -d -e 'X_SOURCE=a' -e 'MODE=foo' --name y_service_afoo y_service
docker run -d -e 'X_SOURCE=b' -e 'MODE=foo' --name y_service_bfoo y_service
docker run -d -e 'X_SOURCE=b' -e 'MODE=bar' --name y_service_bbar y_service
I do this with another service I wrote called 'coordinator', which uses the Docker Engine API to monitor, start and stop these micro services.
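For illustration, the kind of Engine API calls the coordinator issues against the local Docker socket look roughly like this (container name taken from the examples above):
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
curl --unix-socket /var/run/docker.sock -X POST http://localhost/containers/x_service_a/stop
curl --unix-socket /var/run/docker.sock -X POST http://localhost/containers/x_service_a/start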
The reason I can't use docker-compose (as in my example above) is that I can't have two x_service containers running with identical config.
So is it fine to manage them with the Docker Engine API?
Services are generally scaled up and scaled down based on an organization's needs. This translates to starting and stopping containers dynamically.
Many times, the same Docker image is started with different configurations. Think of a company managing various WordPress websites for different customers.
So, to answer your question of whether it is bad practice to start/stop Docker containers dynamically: no, it is not.
There are multiple ways to manage Docker containers: some people manage them with plain docker commands, some with docker-compose, and some with more advanced management platforms.
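For instance, if you stay with plain docker commands, labels make dynamically started containers easy to find and stop as a group (the label key/value service=x here is arbitrary):
docker run -d -e 'MODE=a' --label service=x --name x_service_a x_service
docker ps --filter label=service=x
docker stop $(docker ps -q --filter label=service=x)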
I have a docker-compose setup with a bunch of backend services (postgres, redis, ...), a few apps (rails, node, ...) and an nginx on top of it.
The apps are connected to the databases using Docker env variables (e.g. DOCKERCOMPOSEDEMO_POSTGRES_1_PORT_5432_TCP_ADDR), and nginx is connected to the apps using the Docker-generated /etc/hosts (e.g. upstream nodeapp1-upstream { server dockercomposedemo_node_app1_1:3000; }).
The problem is that each time I restart some service it gets a new IP address, so everything on top of it can no longer connect to it. Restarting a rails app therefore requires restarting nginx, and restarting a database requires restarting the apps and nginx.
Am I doing something wrong, or is this the intended behaviour? Always restarting all that stuff doesn't look like a good solution.
Thank you
It is the intended behaviour. There are many ways to avoid restarting dependent services; I'm using the following approach.
I run most of my dockerized services tied to their own static IPs:
I create IP aliases for all services on the Docker host.
Then I run each service, redirecting ports from its alias IP into the container, so each service has its own static IP that can be used by external users and by other containers.
Sample:
docker run --name dns --restart=always -d -p 172.16.177.20:53:53/udp dns
docker run --name registry --restart=always -d -p 172.16.177.12:80:5000 registry
docker run --name cache --restart=always -d -p 172.16.177.13:80:3142 -v /data/cache:/var/cache/apt-cacher-ng cache
docker run --name mirror --restart=always -d -p 172.16.177.19:80:80 -v /data/mirror:/usr/share/nginx/html:ro mirror
...
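The host-side IP aliases themselves can be added with iproute2, for example (the interface name eth0 is an assumption for your host):
sudo ip addr add 172.16.177.20/32 dev eth0
sudo ip addr add 172.16.177.12/32 dev eth0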
I have multiple docker containers on a single machine. On each container is running a process and a web server that provides an API for the process.
My question is, how can I access the API from my browser when the default port is 80? To be able to access the web server inside a Docker container I do the following:
sudo docker run -p 80:80 -t -i <yourname>/<imagename>
This way I can do the following from my computer's terminal:
curl http://hostIP:80/foobar
But how to handle this with multiple containers and multiple web servers?
You can either expose multiple ports, e.g.
docker run -p 8080:80 -t -i <yourname>/<imagename>
docker run -p 8081:80 -t -i <yourname1>/<imagename1>
or put a proxy (nginx, apache, varnish, etc.) in front of your API containers.
Update:
The easiest way to do a proxy would be to link it to the API containers, e.g. having apache config
RewriteRule ^api1/(.*)$ http://api1/$1 [proxy]
RewriteRule ^api2/(.*)$ http://api2/$1 [proxy]
you may run your containers like this:
docker run --name api1 <yourname>/<imagename>
docker run --name api2 <yourname1>/<imagename1>
docker run --link api1:api1 --link api2:api2 -p 80:80 <my_proxy_container>
This might be somewhat cumbersome, though, if you need to restart the API containers, as the proxy container would have to be restarted as well (links are fairly static in Docker for now). If this becomes a problem, you might look at approaches like fig or an auto-updated proxy configuration: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/ . The latter link also shows proxying with nginx.
Update II:
In more modern versions of Docker it is possible to use a user-defined network instead of the links shown above, which overcomes some of the inconveniences of the now-deprecated link mechanism.
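A rough sketch of that approach (the network name api-net is arbitrary):
docker network create api-net
docker run -d --network api-net --name api1 <yourname>/<imagename>
docker run -d --network api-net --name api2 <yourname1>/<imagename1>
docker run -d --network api-net -p 80:80 <my_proxy_container>
On the user-defined network, api1 and api2 are resolvable by name from the proxy container, and the API containers can be restarted without restarting the proxy.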
Only a single process is allowed to bind to a given port at a time. So running multiple containers means each will be exposed on a different port number. Docker can do this automatically for you by using the "-P" option.
sudo docker run -P -t -i <yourname>/<imagename>
You can use the "docker port" and "docker inspect" commands to see the actual port number allocated to each container.
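For example (the container name and the port Docker picks will differ on your machine):
sudo docker port <container-name>
# prints something like: 80/tcp -> 0.0.0.0:32768
curl http://hostIP:32768/foobar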