Docker and Airflow - Error to init the container - docker

I get the error below when I try to start the container where I am running Airflow:
"Incorrect remote log configuration. Please check the configuration of
option 'host' in "
This started happening after I changed remote_logging from False to True.
My container is not starting because of this, so I can't get back in to revert the configuration. Is there a way to go back to the previous configuration, or to fix it, without having to create a new environment?

If you have a container that won't keep running, but you want to start it for experimentation or modification, you can always override the entrypoint like this:
docker run -it --entrypoint sh <docker-image>
Once you have the container shell open, you can make your modifications and then start the process that the container would have normally started, e.g. Airflow.
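A minimal sketch of that workflow for this Airflow case. The image name (apache/airflow) and the config path are assumptions, and if your airflow.cfg lives on a mounted volume, mount it the same way here so the change reaches your original setup:
docker run -it --entrypoint sh -v "$(pwd)/airflow.cfg:/opt/airflow/airflow.cfg" apache/airflow
# inside the container shell, revert the offending setting:
sed -i 's/^remote_logging = True/remote_logging = False/' /opt/airflow/airflow.cfg
# then start the process the container would normally run, e.g.:
airflow webserver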

Related

Docker container run and pause right after

I have a docker service/image I'm using which restarts as soon as it starts.
I'm unable to fix the issue by getting into the container using
docker exec -it CONTAINER_NAME
since it restarts/terminates as soon as it boots.
Is there any way I can pause it directly? I can't rebuild the image, as I don't have internet access on the server. (Yes, I'm sure a rebuild or build --no-cache would fix the issue.)
The issue should be easily fixable if I modify permissions for a certain folder, but I'm not sure how to do this inside the container when I can't access it. The image doesn't have a Dockerfile and is used directly from Docker Hub.
If we do not get any information from the container's logs, we have the option to start the process "manually". For this, we start the container with an interactive terminal (-it: -i to keep STDIN open, -t to open a pseudo-TTY) and override the entrypoint to be a shell, e.g. bash. For good measure, we want the container to be removed when it terminates (i.e. when we exit the terminal, --rm):
docker run ... -it --rm --entrypoint /bin/bash
Once inside the container, we can start the process that would have normally started through the entrypoint from the container's terminal and extract error information from here.
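A sketch of applying that to the permissions case from the question. The image name and folder path are assumptions, and --rm is dropped on purpose so the fixed container survives and can be committed to a new image (docker commit), since rebuilding isn't an option here:
docker run -it --entrypoint /bin/bash my-image
chmod -R u+rwX /var/lib/app                    # hypothetical permission fix; adjust to the real folder
exit
docker commit <container-id> my-image:fixed    # persist the fix as a new image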

Multiple connected windows display the same session information for one container

On my CentOS server I created a container with Docker.
I opened two sessions connected to the container with the command:
docker attach container-name
but there is an issue: whatever command I execute in one window, the other window displays the same output,
so I cannot control the container while it is installing a package.
Is it possible to avoid this issue?
The docker attach command attaches to the currently running process as defined by CMD. You can attach as many times as you want, but they all connect to the same process.
If you want to access the container and have different sessions to it, use:
docker exec -it container-name bash
Or whatever shell is available. bash is common, but you may need to use sh or find out what's used, if any is there at all. Some containers are super stripped down.
The -it flag enables "interactive" mode, as otherwise it just runs that command and shows you the output.
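For example (container name taken from the question; fall back to sh if bash isn't installed):
docker attach container-name          # every attach joins the same main process, so all windows mirror each other
docker exec -it container-name bash   # each exec starts its own shell, so the sessions are independent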

Difference between docker-compose run, start, up

I'm new to Docker.
What is the difference between these?
docker run 'an image'
docker-compose run 'something'
docker-compose start 'docker-compose.yml'
docker-compose up 'docker-compose.yml'
Thanks in advance.
https://docs.docker.com/compose/faq/#whats-the-difference-between-up-run-and-start
What’s the difference between up, run, and start?
Typically, you want docker-compose up. Use up to start or restart all the services defined in a docker-compose.yml. In the default “attached” mode, you see all the logs from all the containers. In “detached” mode (-d), Compose exits after starting the containers, but the containers continue to run in the background.
The docker-compose run command is for running “one-off” or “adhoc” tasks. It requires the service name you want to run and only starts containers for services that the running service depends on. Use run to run tests or perform an administrative task such as removing or adding data to a data volume container. The run command acts like docker run -ti in that it opens an interactive terminal to the container and returns an exit status matching the exit status of the process in the container.
The docker-compose start command is useful only to restart containers that were previously created, but were stopped. It never creates new containers.
Also: https://docs.docker.com/compose/reference/
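A short sketch of how the three commands behave against a hypothetical docker-compose.yml containing a single service named web:
docker-compose up -d           # create and start all services in the background
docker-compose run web bash    # start a one-off container for the web service with an interactive shell
docker-compose stop            # stop the containers without removing them
docker-compose start           # restart the previously created, now-stopped containers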

Is it possible to access the entry command bash on a running docker container?

I have a Node docker container on which I'm running a dev server.
In my docker-compose.yml file, the entry command is:
...
command: start-dev-server
...
Where start-dev-server points to a script that installs dependencies and then starts the server:
#!/usr/bin/env bash
# /usr/local/bin/start-dev-server
# install node modules if missing
npm i
# start the dev server
npm run start
So when I start my container, the server will also start.
I know that I can access my container in bash via the following command:
docker exec -it my-container bash
But from there I can't stop or restart my server.
Is there a way to attach to the shell where the start command is running (to see the server logs, for example, or to stop and restart it)?
Maybe I'm taking the wrong approach here, because the entry command isn't supposed to be stopped? In that case, does anyone have a solution that would allow me to start my server and control it in a more flexible way?
Best practice says that you should see the container as your server. If you want to stop it, stop the container (docker stop my-container); if you want to restart it, restart the container (docker restart my-container). Your server should log to stdout, so you can see the logs using docker logs -f my-container. So, you're right, the command isn't supposed to be stopped, as stopping it would stop the container.
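A quick sketch of controlling the server from the host. The container name comes from the question, and the Compose service name node is an assumption:
docker logs -f my-container      # follow the dev server output (whatever npm run start prints)
docker restart my-container      # restart the server by restarting the container
docker stop my-container         # stop it entirely
docker-compose restart node      # with Compose, the same thing by service name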

How to "start over" with Docker?

I am trying to run Tomcat in a Docker container with limited success. After I tried various things, I wanted to "reset" without completely deleting everything. I did stop and remove the virtual machine from the Virtualbox console. I then tried docker-machine create and docker-machine restart. My question is, if things reach a state in which the application appears to be hanging, what is the best procedure for starting from scratch that does not involve, for example, actually rebuilding the Docker container?
EDIT: All I am asking now is: given that docker version returns the Client information but shows "An error occurred trying to connect" when it reaches the Server information, what needs to be done? What is it not connecting to? I tried docker-machine restart with apparent success, but got no further with docker version after that.
First, don't delete the boot2docker VM itself (created by docker-machine)
If you want to reset, you might have to delete the container and image (quickly rebuilt with a docker build). But you can stay in the same docker-based boot2docker VM. No need for deletion.
Retrying a docker container session simply involves killing/removing the current container and doing a new docker run (see the sketch below).
Then, don't forget to check what is not working: does docker ps -a show your container running? Can you access Tomcat from the boot2docker Linux host? From your actual OS host?
Based on that diagnostic and the exact content of your Dockerfile, you will be able to debug the issue.
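A sketch of that kill/rerun cycle. The container name, image tag, and port mapping are assumptions based on the Tomcat setup described:
docker rm -f my-tomcat                         # kill and remove the hung container
docker build -t my-tomcat-image .              # quick rebuild; cached layers keep this fast
docker run -d --name my-tomcat -p 8080:8080 my-tomcat-image
docker ps -a                                   # confirm the new container is actually running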
The main issue might come from the fact that docker commands are executed from outside the VM.
That works only if the environment variables printed by docker-machine env <machine-name> are set.
See docker-machine env:
For cmd.exe:
$ docker-machine.exe env --shell cmd dev
set DOCKER_TLS_VERIFY=1
set DOCKER_HOST=tcp://192.168.99.101:2376
set DOCKER_CERT_PATH=C:\Users\captain\.docker\machine\machines\dev
set DOCKER_MACHINE_NAME=dev
# Run this command to configure your shell: copy and paste the above values into your command prompt.
(replace "dev" by the name of your docker machine here, probably "default")
But it is also perfectly fine to make all docker command from within the VM. No "env" to set.
Everything is on the VM (images, and the Dockerfile, which can also be on the Windows host as long as it is under C:\Users\<yourLogin>, since that folder is automatically mounted as /c/Users/<yourLogin>).
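It is also possible to skip the environment variables entirely and open a shell inside the VM with docker-machine ssh (the machine name "default" is an assumption):
docker-machine ssh default
docker ps -a      # the docker client inside the VM talks to the local daemon directly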
