Can't Get Rid of Old Docker-Compose Services - docker

Two weeks ago I created a docker-compose.yml file to start two services, but this week when I try to start those services Docker appends a "-1" to the service name. I am using Docker Desktop on a Windows 10 machine. Here is my yml file:
services:
  pgdatabase:
    image: postgres:13
    environment:
      - POSTGRES_USER=####
      - POSTGRES_PASSWORD=####
      - POSTGRES_DB=ny_taxi
    volumes:
      - "./ny_taxi_postgres_data:/var/lib/postgresql/data:rw"
    ports:
      - "5432:5432"
  pgadmin:
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=#########.com
      - PGADMIN_DEFAULT_PASSWORD=####
    ports:
      - "8080:80"
This worked perfectly when I created it, but now when I run docker-compose up the containers that get created are pgadmin-1 and pgdatabase-1.
If I then run docker-compose down, and do a docker ps the output shows that no containers are running. However, if I run docker-compose config --services I get the following:
pgadmin
pgdatabase
Restarting Docker does nothing, and the issue occurs even if I delete all containers and all volumes from Docker Desktop.
docker-compose start returns service "pgadmin" has no container to start. If I run docker-compose up and then docker-compose start pgadmin I get no output from the command line. However, listing the active containers after doing this still only shows pgadmin-1. Running docker-compose down after these steps does not resolve the issue.
docker rm -f pgadmin returns Error: No such container: pgadmin.
docker service rm pgadmin returns Error: No such service: pgadmin.
docker-compose up -d --force-recreate --renew-anon-volumes just creates pgadmin-1 and pgdatabase-1 again.
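For what it's worth (an assumption about the cause, not confirmed in this thread): newer Docker Desktop releases ship Compose V2, which names containers <project>-<service>-<index> with hyphens, hence pgadmin-1 and pgdatabase-1; the older V1 scheme used underscores. If fixed names matter, container_name pins them, as in this sketch:

```yaml
# container_name overrides the generated <project>-<service>-1 name.
# Trade-off: a service with container_name cannot be scaled past one replica.
services:
  pgdatabase:
    image: postgres:13
    container_name: pgdatabase
```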

Related

Docker ps doesn't show containers created/running with docker-compose

I'm trying to understand why I can't see containers created with docker-compose up -d using docker ps. If I go to the folder where the docker-compose.yaml is located and run docker-compose ps, I can see the container running. I did the same on Windows (here I'm using Ubuntu) and there it works as expected: I can see the container just by running docker ps. Could anyone give me a hint about this behavior, please? Thanks in advance.
Environment:
Docker version 20.10.17, build 100c701
docker-compose version 1.25.0, build unknown
Ubuntu 20.04.4 LTS
in my terminal i see this output:
/GIT/project$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
/GIT/project$ cd scripts/
/GIT/project/scripts$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
/GIT/project/scripts$ docker-compose ps
    Name                  Command               State                    Ports
---------------------------------------------------------------------------------------------------
scripts_db_1   docker-entrypoint.sh --def ...   Up      0.0.0.0:3306->3306/tcp,:::3306->3306/tcp, 33060/tcp
/GIT/project/scripts$
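As an aside, the container name scripts_db_1 in the output above follows Compose V1's default naming scheme, <project>_<service>_<index>, where the project name defaults to the directory containing the compose file. A runnable sketch of that rule (the directory path is taken from the prompt above):

```shell
# Compose V1 names containers <project>_<service>_<index>, where the
# project name defaults to the directory holding docker-compose.yaml.
compose_dir="/GIT/project/scripts"   # directory from the prompt above
service="db"
project=$(basename "$compose_dir" | tr '[:upper:]' '[:lower:]')
echo "${project}_${service}_1"       # prints: scripts_db_1
```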
docker-compose.yaml
version: '3.3'
services:
  db:
    image: mysql:5.7
    # NOTE: use of "mysql_native_password" is not recommended: https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html#upgrade-caching-sha2-password
    # (this is just an example, not intended to be a production configuration)
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    ports:
      # <Port exposed> : <MySQL Port running inside container>
      - 3306:3306
    expose:
      # Opens port 3306 on the container
      - 3306
    # Where our data will be persisted
    volumes:
      - treip:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: changeit
      MYSQL_DATABASE: treip
volumes:
  treip:
I executed the container with sudo and the problem was solved. Now the container appears in docker ps, so instead of docker-compose up I ran it as sudo docker-compose up. Sorry, my bad.
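A likely explanation for why sudo made the difference (my assumption, not confirmed in the thread): without sudo, the docker CLI was either talking to a different daemon or lacked permission on the Docker socket. A small diagnostic sketch:

```shell
# Show which endpoint the docker CLI targets; when DOCKER_HOST is unset,
# it defaults to the root daemon's socket:
echo "${DOCKER_HOST:-unix:///var/run/docker.sock}"
# Further checks (these need the docker CLI and a running daemon):
#   docker context ls        # list configured daemon endpoints
#   id -nG | grep -w docker  # is the current user in the docker group?
```

Adding your user to the docker group (sudo usermod -aG docker $USER, then logging back in) is the usual way to avoid prefixing every command with sudo.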

How do I run a website in bitnami+docker+nginx

I'm taking over a website https://www.funfun.io. Unfortunately, I cannot reach the previous developer anymore.
This is a AngularJS+Node+Express+MongoDB application. He decided to use bitnami+docker+nginx in the server. Here is docker-compose.yml:
version: "3"
services:
  funfun-node:
    image: funfun
    restart: always
    build: .
    environment:
      - MONGODB_URI=mongodb://mongodb:27017/news
    env_file:
      - ./.env
    depends_on:
      - mongodb
  funfun-nginx:
    image: funfun-nginx
    restart: always
    build:
      context: .
      dockerfile: Dockerfile.nginx
    ports:
      - "3000:8443"
    depends_on:
      - funfun-node
  mongodb:
    image: mongo:3.4
    restart: always
    volumes:
      - "10studio-mongo:/data/db"
    ports:
      - "27018:27017"
networks:
  default:
    external:
      name: 10studio
volumes:
  10studio-mongo:
    driver: local
Dockerfile.nginx:
FROM bitnami/nginx:1.16
COPY ./funfun.io /opt/bitnami/nginx/conf/server_blocks/default.conf
COPY ./ssl/MyCompanyLocalhost.cer /opt/MyCompanyLocalhost.cer
COPY ./ssl/MyCompanyLocalhost.pvk /opt/MyCompanyLocalhost.pvk
Dockerfile:
FROM node:12
RUN npm install -g yarn nrm --registry=https://registry.npm.taobao.org && nrm use cnpm
COPY ./package.json /opt/funfun/package.json
WORKDIR /opt/funfun
RUN yarn
COPY ./ /opt/funfun/
CMD yarn start
On my local machine, I could use npm start to test the website in a web browser.
I have access to the Ubuntu server, but I'm new to bitnami+docker+nginx, and I have the following questions:
In the command line of Ubuntu server, how could I check if the service is running (besides launching the website in a browser)?
How could I shut down and restart the service?
Previously, without docker, we could start mongodb by sudo systemctl enable mongod. Now, with docker, how could we start mongodb?
First of all, to deploy the services defined in the compose file locally, run one of the commands below:
docker-compose up
docker-compose up -d # in the background
After running the above command, the containers will be created and available on your machine.
To list the running containers
docker ps
docker-compose ps
To stop containers
docker stop ${container name}
docker-compose stop
mongodb is part of the docker-compose file and will be running once you start the other services. Because of restart: always, it is also restarted automatically if it crashes or you restart your machine.
One final note: since you are using an external network, you may need to create the network before starting the services.
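Concretely, the compose file above marks the default network as external, which tells Compose the network must already exist; a sketch of the relevant fragment:

```yaml
# "external" tells Compose not to create this network itself;
# create it once beforehand with: docker network create 10studio
networks:
  default:
    external:
      name: 10studio
```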
1.
docker-compose ps will give you the state of your containers
2.
docker-compose stop will stop your containers, keeping their state; you can then start them again with docker-compose up
docker-compose kill will force-stop your containers (it sends SIGKILL; docker-compose rm is what deletes them)
docker-compose restart will restart your containers
3.
Because your mongodb service is declared with the official mongo image, its container starts when you run docker-compose up, without any other intervention.
Or you can add command: mongod --auth directly into your docker-compose.yml
The official Docker documentation is very detailed and helps a lot with all of this; keep looking at it: https://docs.docker.com/compose/

Gitlab-CI backup lost by restarting Docker desktop

I have Docker Desktop installed on my Windows PC. In it, I have a self-hosted GitLab running in one Docker container. Today I tried to back up my GitLab instance by typing the following command:
docker exec -t <my-container-name> gitlab-backup create
After running this command the backup succeeded and I saw a message that the backup was done. I then restarted Docker Desktop and waited for the container to start; when it did, I accessed the GitLab interface, but I saw a fresh GitLab instance.
I then typed the following command to restore my backup:
docker exec -it <my-container-name> gitlab-backup restore
But I saw this message:
No backups found in /var/opt/gitlab/backups
Please make sure that file name ends with _gitlab_backup.tar
What can be the reason? Am I doing it the wrong way? I saw these commands on GitLab's official website.
I have this in the docker-compose.yml file:
version: "3.6"
services:
  web:
    image: 'gitlab/gitlab-ce'
    container_name: 'gitlab'
    restart: always
    hostname: 'localhost'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://localhost:9090'
        gitlab_rails['gitlab_shell_ssh_port'] = 2224
    networks:
      - gitlab-network
    ports:
      - '80:80'
      - '443:443'
      - '9090:9090'
      - '2224:22'
    volumes:
      - '/srv/gitlab/config:/etc/gitlab'
      - '/srv/gitlab/logs:/var/log/gitlab'
      - '/srv/gitlab/data:/var/opt/gitlab'
networks:
  gitlab-network:
    name: gitlab-network
I used this command to run the container:
docker-compose up --build --abort-on-container-exit
If you started your container using volumes, try looking in C:\ProgramData\docker\volumes for your backup.
The backup is normally located at: /var/opt/gitlab/backups within the container. So hopefully you mapped /var/opt/gitlab to either a volume or a bind mount.
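One hedged alternative (an assumption, since the thread doesn't confirm the root cause): on Docker Desktop for Windows, Linux-style bind-mount paths like /srv/gitlab may not map to anywhere persistent, so each restart can look like a fresh instance. Named volumes sidestep that, as in this sketch:

```yaml
# Named volumes are managed by Docker and survive restarts, unlike
# Linux-style host paths that may not exist on a Windows host.
services:
  web:
    image: 'gitlab/gitlab-ce'
    volumes:
      - gitlab-config:/etc/gitlab
      - gitlab-logs:/var/log/gitlab
      - gitlab-data:/var/opt/gitlab
volumes:
  gitlab-config:
  gitlab-logs:
  gitlab-data:
```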
Did you try supplying the name of the backup file, as for the omnibus install? When I've restored a backup in Docker, I basically use the omnibus instructions, but use docker exec to do it. Here are the commands I've used from my notes.
docker exec -it gitlab gitlab-ctl stop unicorn 
docker exec -it gitlab gitlab-ctl stop sidekiq 
docker exec -it gitlab gitlab-rake gitlab:backup:restore BACKUP=1541603057_2018_11_07_10.3.4
docker exec -it gitlab gitlab-ctl start 
docker exec -it gitlab gitlab-rake gitlab:check SANITIZE=true
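If the archive exists on the host but not inside the container, docker cp gets it into place before the restore (a sketch; the file name is the example from the commands above, and the container is assumed to be named "gitlab"):

```shell
# Copy the backup archive into the container's backup directory, then fix
# ownership so the restore task can read it (requires a running daemon):
docker cp 1541603057_2018_11_07_10.3.4_gitlab_backup.tar \
  gitlab:/var/opt/gitlab/backups/
docker exec -it gitlab chown git:git \
  /var/opt/gitlab/backups/1541603057_2018_11_07_10.3.4_gitlab_backup.tar
```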
It looks like they added a gitlab-backup command at some point, so you can probably use that instead of gitlab-rake.

After installing puckel/docker-airflow locally, no task instance is running and tasks get stuck forever

I used this tutorial to install Airflow with Docker on my local Mac: http://www.marknagelberg.com/getting-started-with-airflow-using-docker/ and everything worked well. I have the UI and I can connect my dags.
However, when I trigger my task manually it does not run, and I get an error message.
My task in the web UI: [screenshot]
I work on a Mac and I have used this code :
docker pull puckel/docker-airflow
docker run -d -p 8080:8080 -v /path/to/dags:/usr/local/airflow/dags puckel/docker-airflow webserver
Does someone have an idea of how I could fix this? Thanks for your help.
Is the Airflow scheduler running?
The Airflow webserver can only show the DAGs and task status; the scheduler is what actually runs the tasks.
The command you showed above never starts an Airflow scheduler.
So you can run the commands below in another console.
docker ps | grep airflow
Use the above command to get the container ID.
docker exec -it [container ID] airflow scheduler
As the better long-term way, I suggest using docker-compose:
instead of plain docker, use docker-compose to manage your whole Docker stack.
Here is the sample code for my puckel/docker-airflow based setup:
version: '3'
services:
  postgres:
    image: 'postgres:12'
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
    volumes:
      - ./pg_data:/var/lib/postgresql/data
  webserver:
    image: puckel/docker-airflow:1.10.9
    restart: always
    depends_on:
      - postgres
    environment:
      - LOAD_EX=n
      - EXECUTOR=Local
      - AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgres://airflow:airflow@postgres/airflow
    volumes:
      - ./dags:/usr/local/airflow/dags
    ports:
      - "8080:8080"
    command: webserver
    healthcheck:
      test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
To use it, you can:
1. Create a project folder and copy the reference code above into docker-compose.yml.
2. Check that the configuration is valid with the following docker-compose command:
docker-compose config
3. Bring the docker-compose project up with:
docker-compose up
Note: if you do not want to see detailed logs, you can run it in the background with:
docker-compose up -d
Now you can enjoy the Airflow UI in your browser at the following URL:
http://<the host ip>:8080

docker-compose start "ERROR: No containers to start"

I am trying to use Docker Compose (with Docker Machine on Windows) to launch a group of Docker containers.
My docker-compose.yml:
version: '2'
services:
  postgres:
    build: ./postgres
    environment:
      - POSTGRES_PASSWORD=mysecretpassword
  frontend:
    build: ./frontend
    ports:
      - "4567:4567"
    depends_on:
      - postgres
  backend:
    build: ./backend
    ports:
      - "5000:5000"
    depends_on:
      - postgres
docker-compose build runs successfully. When I run docker-compose start I get the following output:
Starting postgres ... done
Starting frontend ... done
Starting backend ... done
ERROR: No containers to start
I did confirm that the docker containers are not running. How do I get my containers to start?
The issue here is that you haven't actually created the containers; you have to create them before you can start them. You could use docker-compose up instead, which will create the containers and then start them.
Or you could run docker-compose create to create the containers, and then docker-compose start to start them.
The reason why you saw the error is that docker-compose start and docker-compose restart assume that the containers already exist.
If you want to build and start containers, use
docker-compose up
If you only want to build the containers, use
docker-compose up --no-start
Afterwards, docker-compose {start,restart,stop} should work as expected.
There used to be a docker-compose create command, but it is now deprecated in favor of docker-compose up --no-start.
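Putting the two answers together, a minimal command sequence (a sketch; it requires a running Docker daemon and a compose file in the current directory):

```shell
docker-compose build          # build the images only
docker-compose up --no-start  # create containers without starting them
docker-compose start          # succeeds now, because the containers exist
docker-compose stop           # stop them, keeping their state
docker-compose start          # bring the same containers back up
```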
