GitLab Runner runs without port mapping - docker

I am running gitlab-runner inside a container and registering the runner from that container.
docker run -d --name gitlab-runner --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
I started my container with the command above, and everything works as expected. However, as you can see, I didn't specify any ports in my command. So is it using something else (I don't know what)? It still works fine even when I change the network (my custom network).
I am just a newbie with Docker, but the definition of a container says that each container has an isolated environment and can't communicate with the outside without port mappings. Right?

You don't have to bind a port on the host, because the runner polls your GitLab instance periodically and GitLab hands it jobs to do.
The runner initiates the connection (outbound), so you don't have to publish any inbound port.
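For reference, registering the runner works the same way: it is just another outbound HTTPS call, so no published port is needed there either. A minimal sketch (the GitLab URL and registration token are placeholders to replace with your own):
docker exec -it gitlab-runner gitlab-runner register \
--non-interactive \
--url https://gitlab.example.com/ \
--registration-token <REGISTRATION_TOKEN> \
--executor docker \
--docker-image alpine:latest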

Prefect UI cannot connect to Orion API when deployed on a remote virtual machine

Setup
I have a Physical Machine (PM) running on Debian GNU/Linux 11 (bullseye) and a remote Virtual Machine (VM) running on AlmaLinux 8.7 (Stone Smilodon) on the same network.
Goal
I would like to run a container with a Prefect server on the VM and to access the Prefect UI from anywhere on the network by requesting http://<VM_ip>:4200.
This would allow for a local development of the new Flows and then a simple way to run them. (For anyone who may have used the docker-compose.yaml from Airflow, I would like to reproduce this behavior)
Issue
On my PM, I can run Prefect either directly (prefect orion start), or in a container (docker run --network host -it prefecthq/prefect:2-latest prefect orion start). In these cases, I can see the relevant Flows' executions, etc.
The problem arises when I try to relocate my Prefect server onto my VM.
It seems that whatever I try, I ultimately end up with a Can't connect to Orion API at http://0.0.0.0:4200/api. Check that it's accessible from your machine. error message when trying to open the Prefect UI on my PM. Other than that, Prefect seems to be running correctly in the container (i.e. I can build, apply, and run Flow deployments).
What I have tried
The main thing I have tried is the following command:
docker run \
--name prefect \
--env PREFECT_ORION_API_HOST=0.0.0.0 \
-v ~/containers/prefect/deployments:/deployments \
-p 4200:4200 \
-d prefecthq/prefect:2-latest \
prefect orion start
As far as I understand it, this should do the same as the container deployment on the PM, while publishing port 4200 to allow access from outside.
Where I have looked
I tried asking my question on the Prefect Discourse but, while I received answers, none of them did the trick. Among the suggestions was the Prefect Docker Compose method, but it fails in the same fashion.
Where is the error here? Is it at all possible? Thanks for reading, commenting and answering!
You could start the server so it binds to all interfaces, then point the client at the VM's address (not 0.0.0.0, which is a bind address, not a reachable one):
docker run -p 4200:4200 prefecthq/prefect:2-latest prefect orion start --host 0.0.0.0
prefect config set PREFECT_API_URL="http://<VM_ip>:4200/api"
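Before changing the client configuration, it is worth checking that the API is reachable from the PM at all. A quick hedged check, assuming the standard Orion health endpoint is available (replace <VM_ip> as before):
# run from the PM; a response means the published port is reachable
curl http://<VM_ip>:4200/api/health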

How to run Hyperledger Composer Rest Server docker image?

I have pulled the hyperledger/composer-rest-server Docker image. Now, if I want to run this image, which ports should I expose? Like mentioned below:
docker run --name composer-rest-server --publish XXXX:YYYY --detach hyperledger/composer-rest-server
Here, please tell me what I should replace XXXX and YYYY with.
I run the rest server in a container using a command as follows:
docker run -d \
-e COMPOSER_CARD="admin#test-net" \
-e COMPOSER_NAMESPACES="never" \
-v ~/.composer:/home/composer/.composer \
--name rest -p 3000:3000 \
hyperledger/composer-rest-server
For the published port, the first element is the port that will be used on the Docker host, and the second is the port it is forwarded to inside the container. (The port inside the container will always be 3000 by default and is more complex to change.)
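For example, to expose the REST server on host port 8080 while it keeps listening on 3000 inside the container (8080 is an arbitrary choice for illustration):
docker run -d \
-e COMPOSER_CARD="admin@test-net" \
-e COMPOSER_NAMESPACES="never" \
-v ~/.composer:/home/composer/.composer \
--name rest -p 8080:3000 \
hyperledger/composer-rest-server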
I'm passing two environment variables into the container which the REST server will recognise: COMPOSER_NAMESPACES just keeps the endpoints simple, but COMPOSER_CARD is essential for the REST server to start properly.
I'm also sharing a volume between the Docker host and the container, which is where the cards are stored, so that the REST server can find the COMPOSER_CARD referred to in the environment variable.
Warning: if you are trying to test the REST server with the development Fabric, you need to understand the IP networking and addressing of the Docker containers. By default, the Composer business network cards are built using localhost as the address of the Fabric servers, but you can't use localhost in the REST server container, as that resolves to the container itself and the Fabric will not be found.
There is a tutorial in the Composer Docs that is focused on multi-user authentication, but it also covers the networking aspects of running the REST server in a container, and the same documentation has general information about the REST server.
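If your development Fabric also runs in Docker on the same host, one sketch of the usual workaround is to attach the REST server container to the Fabric's Docker network instead of relying on localhost. The network name below is only an example; check docker network ls for the real one in your setup:
# find the network the Fabric containers are attached to
docker network ls
# attach the running REST server container to it (composer_default is a guess)
docker network connect composer_default rest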

How to run docker container in production in Centos?

So on my server, I run my docker container as a daemon process via:
docker run -p 80:80 -td example
It seems to work fine - for now!
But what if I restart my server, or my Docker container crashes? Then it stops working.
What is the best/conventional/standard way to keep my docker container running?
Thanks!
What you are looking for is an orchestrator. An orchestrator will manage your container life cycle for you. You might want to try the Docker orchestrator, Swarm. You can also check Kubernetes or Mesos.
You could also use Docker Compose to make things easier.
As @wassim-dif pointed out, you might want to use an orchestrator.
If you just want your docker container to restart automatically in case of failure and when you restart your server then you need to run it using the --restart flag, such as:
docker run -p 80:80 -td --restart=always example
This way, your container will restart automatically.
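If the container is already running without a restart policy, you don't need to recreate it; the policy can be changed in place (my_container is a placeholder for your container name or ID):
docker update --restart=always my_container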

Docker container as daemon

On my host machine, I have installed Docker and pulled a Jenkins image.
I want to run that image as a daemonized service, like the services that start automatically on my host machine every time it reboots. And how can I pin the Jenkins port permanently (e.g. 8080) in Docker?
docker run -d --restart always -p 8080:8080 -p 50000:50000 -v /your/home:/var/jenkins_home jenkins
-d: run the container in the background
--restart always: the container always restarts (unless manually stopped), so it will start automatically at boot.
The rest of the arguments come from the Jenkins image documentation; you may need to adapt the port mappings and volume path.
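Note that --restart always only brings the container back if the Docker daemon itself starts at boot. On a systemd-based host you can make sure of that with:
# enable the Docker service at boot
sudo systemctl enable docker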

How to know server's IP address where jenkins deploys and builds code

Our Jenkins deploys/builds code in a Docker container. Every time Jenkins deploys code, it does so in a different instance of the Docker container. How do I find the IP address and port of that container? Immediately after deployment I want to run my build validation tests against the application residing in that Docker container.
Any insight would be appreciated.
You need to specify -p in docker run and then use docker inspect to grab the port:
docker run -d -p 80 --name app crramirez/limesurvey
export THEPORT=`docker inspect --format='{{(index (index .NetworkSettings.Ports "80/tcp") 0).HostPort}}' app`
Then you call your application: wget http://localhost:${THEPORT}
All of this is necessary if you have many containers running concurrently. But if you only deploy one container at a time, you only need to delete the previous container, run docker run -d -p 80:80 again, and call your application on the fixed port: wget http://localhost:80
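As a simpler alternative to the inspect template, docker port prints the mapping directly (same container name as above; newer Docker versions may print one line per address family, hence the head -n1):
docker port app 80/tcp
# prints something like 0.0.0.0:32768 — keep only the host port
export THEPORT=$(docker port app 80/tcp | head -n1 | cut -d: -f2)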
Regards
