I have Docker Desktop installed on my Windows PC. In it, I have a self-hosted GitLab instance running in a Docker container. Today I tried to back up my GitLab by running the following command:
docker exec -t <my-container-name> gitlab-backup create
After running this command the backup succeeded and I saw a message that the backup was done. I then restarted Docker Desktop and waited for the container to start; when it came up and I accessed the GitLab interface, I saw a brand-new GitLab instance.
I then typed the following command to restore my backup:
docker exec -it <my-container-name> gitlab-backup restore
But I got this message:
No backups found in /var/opt/gitlab/backups
Please make sure that file name ends with _gitlab_backup.tar
What could be the reason? Am I doing it the wrong way? I took these commands from the official GitLab website.
I have this in the docker-compose.yml file:
version: "3.6"
services:
web:
image: 'gitlab/gitlab-ce'
container_name: 'gitlab'
restart: always
hostname: 'localhost'
environment:
GITLAB_OMNIBUS_CONFIG: |
external_url 'http://localhost:9090'
gitlab_rails['gitlab_shell_ssh_port'] = 2224
networks:
- gitlab-network
ports:
- '80:80'
- '443:443'
- '9090:9090'
- '2224:22'
volumes:
- '/srv/gitlab/config:/etc/gitlab'
- '/srv/gitlab/logs:/var/log/gitlab'
- '/srv/gitlab/data:/var/opt/gitlab'
networks:
gitlab-network:
name: gitlab-network
I used this command to run the container:
docker-compose up --build --abort-on-container-exit
If you started your container using volumes, try looking in C:\ProgramData\docker\volumes for your backup.
The backup is normally located at /var/opt/gitlab/backups within the container, so hopefully you mapped /var/opt/gitlab to either a volume or a bind mount.
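A quick way to verify (a sketch, assuming the container name gitlab from the compose file above):
# List the backups inside the container:
docker exec -t gitlab ls -l /var/opt/gitlab/backups
# With the bind mount above, the same files should also appear on the host under /srv/gitlab/data/backups.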
Did you try supplying the name of the backup file, as for the omnibus install? When I've restored a backup in Docker, I basically use the omnibus instructions, but use docker exec to do it. Here are the commands I've used from my notes.
docker exec -it gitlab gitlab-ctl stop unicorn
docker exec -it gitlab gitlab-ctl stop sidekiq
docker exec -it gitlab gitlab-rake gitlab:backup:restore BACKUP=1541603057_2018_11_07_10.3.4
docker exec -it gitlab gitlab-ctl start
docker exec -it gitlab gitlab-rake gitlab:check SANITIZE=true
It looks like they added a gitlab-backup command at some point, so you can probably use that instead of gitlab-rake.
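For example, naming the backup explicitly (a sketch; the timestamp is the illustrative one from the notes above, and BACKUP takes the backup file name without the _gitlab_backup.tar suffix):
docker exec -it gitlab gitlab-backup restore BACKUP=1541603057_2018_11_07_10.3.4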
Related
Two weeks ago I created a docker-compose.yml file to start two services, but this week when I try to start those services, Docker appends a "-1" to the container names. I am using Docker Desktop on a Windows 10 machine. Here is my yml file:
services:
  pgdatabase:
    image: postgres:13
    environment:
      - POSTGRES_USER=####
      - POSTGRES_PASSWORD=####
      - POSTGRES_DB=ny_taxi
    volumes:
      - "./ny_taxi_postgres_data:/var/lib/postgresql/data:rw"
    ports:
      - "5432:5432"
  pgadmin:
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=#########.com
      - PGADMIN_DEFAULT_PASSWORD=####
    ports:
      - "8080:80"
This worked perfectly when I created it, but now when I run docker-compose up the containers that get created are pgadmin-1 and pgdatabase-1.
If I then run docker-compose down and do a docker ps, the output shows that no containers are running. However, if I run docker-compose config --services I get the following:
pgadmin
pgdatabase
Restarting Docker does nothing, and the issue occurs even if I delete all containers and all volumes from Docker Desktop.
docker-compose start returns service "pgadmin" has no container to start. If I run docker-compose up and then docker-compose start pgadmin I get no output from the command line. However, listing the active containers after doing this still only shows pgadmin-1. Running docker-compose down after these steps does not resolve the issue.
docker rm -f pgadmin returns Error: No such container: pgadmin.
docker service rm pgadmin returns Error: No such service: pgadmin.
docker-compose up -d --force-recreate --renew-anon-volumes just creates pgadmin-1 and pgdatabase-1 again.
I created a GitLab CI/CD pipeline with the GitLab runner and GitLab itself.
Right now everything runs except one simple script.
It does not copy any files to the volume.
I'm using docker-compose 2.7.
I also have to say that I'm not 100% sure about the volumes.
Here is an excerpt from my .gitlab-ci.yml:
stages:
  - build_single
  - test_single
  - clean_single
  - build_lb
  - test_lb
  - clean_lb

Build_single:
  stage: build_single
  script:
    - docker --version
    - docker-compose --version
    - docker-compose -f ./NodeApp/docker-compose.yml up --scale slave=1 -d
    - docker-compose -f ./traefik/docker-compose_single.yml up -d
    - docker-compose -f ./DockerJMeter/docker-compose.yml up --scale slave=10 -d
When I run ls, all the files are in the correct folder.
The docker-compose file:
version: '3.7'
services:
  reverse-proxy:
    # The official v2.0 Traefik docker image
    image: traefik:v2.0
    # Enables the web UI and tells Traefik to listen to docker
    command: --api.insecure=true --providers.docker
    ports:
      # The HTTP port
      - "7000:80"
      # The Web UI (enabled by --api.insecure=true)
      - "7080:8080"
    volumes:
      # So that Traefik can listen to the Docker events
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik/config_lb:/etc/traefik
    networks:
      - default
networks:
  default:
    driver: bridge
    name: traefik
For JMeter, I'm using a copy statement to get the configuration files in after it has started, but Traefik needs its files already in place during the boot process.
I thought ./traefik/config_lb:/etc/traefik, with the '.' in front of traefik, creates a path relative to the docker-compose file.
Is this wrong?
I should also say that GitLab and the runner are both dockerized on the host system. So the Docker instance is running on the host system, and gitlab-runner is also using the docker.sock.
Best Regards!
When you use the gitlab-runner in a docker container, it starts another container, the gitlab executor, based on an image that you specify in .gitlab-ci.yml. The gitlab-runner uses the docker sock of the docker host (see /var/run/docker.sock:/var/run/docker.sock in /etc/gitlab-runner/config.toml) to start the executor.
When you then start another container using docker-compose, the docker sock is used again. Any source paths that you specify in docker-compose.yml therefore have to point to paths on the docker host; otherwise the destination in the created service will be empty (given that the source path does not exist on the host).
So what you need to do is find the path to traefik/config_lb on the docker host and provide that as the source.
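One way to find it (a sketch; the exact container name is an assumption, runner executor containers usually have runner- in their names):
# On the docker host, while a job is running:
docker ps
# Inspect the executor container's mounts to see which host paths back the build directory:
docker inspect <executor-container-id> --format '{{ json .Mounts }}'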
I'm taking over a website https://www.funfun.io. Unfortunately, I cannot reach the previous developer anymore.
This is an AngularJS+Node+Express+MongoDB application. He decided to use Bitnami+Docker+NGINX on the server. Here is the docker-compose.yml:
version: "3"
services:
funfun-node:
image: funfun
restart: always
build: .
environment:
- MONGODB_URI=mongodb://mongodb:27017/news
env_file:
- ./.env
depends_on:
- mongodb
funfun-nginx:
image: funfun-nginx
restart: always
build:
context: .
dockerfile: Dockerfile.nginx
ports:
- "3000:8443"
depends_on:
- funfun-node
mongodb:
image: mongo:3.4
restart: always
volumes:
- "10studio-mongo:/data/db"
ports:
- "27018:27017"
networks:
default:
external:
name: 10studio
volumes:
10studio-mongo:
driver: local
Dockerfile.nginx:
FROM bitnami/nginx:1.16
COPY ./funfun.io /opt/bitnami/nginx/conf/server_blocks/default.conf
COPY ./ssl/MyCompanyLocalhost.cer /opt/MyCompanyLocalhost.cer
COPY ./ssl/MyCompanyLocalhost.pvk /opt/MyCompanyLocalhost.pvk
Dockerfile:
FROM node:12
RUN npm install -g yarn nrm --registry=https://registry.npm.taobao.org && nrm use cnpm
COPY ./package.json /opt/funfun/package.json
WORKDIR /opt/funfun
RUN yarn
COPY ./ /opt/funfun/
CMD yarn start
On my local machine, I can use npm start to test the website in a web browser.
I have access to the Ubuntu server, but I'm new to Bitnami+Docker+NGINX and have the following questions:
In the command line of Ubuntu server, how could I check if the service is running (besides launching the website in a browser)?
How could I shut down and restart the service?
Previously, without docker, we could start mongodb with sudo systemctl enable mongod. Now, with docker, how can we start mongodb?
First of all, to deploy the services mentioned in the compose file locally, you should run one of the commands below:
docker-compose up
docker-compose up -d # in the background
After running the above command, the containers will be created and available on your machine.
To list the running containers:
docker ps
docker-compose ps
To stop the containers:
docker stop ${container name}
docker-compose stop
mongodb is part of the docker-compose file, and it will be running once you start the other services. It will also be restarted automatically in case it crashes or you restart your machine.
One final note: since you are using an external network, you may need to create the network before starting the services.
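In this compose file the external network is named 10studio, so that would be:
docker network create 10studio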
1.
docker-compose ps will give you the state of your containers
2.
docker-compose stop will stop your containers and keep their state, so you can then start them as they are with docker-compose up
docker-compose kill will force-stop your containers; docker-compose down will stop and remove them
docker-compose restart will restart your containers
3.
Because your mongodb service is declared from the official mongo image, its container starts when you run docker-compose up, without any other intervention.
Or you can add command: mongod --auth directly into your docker-compose.yml, like this:
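(A minimal sketch based on the mongodb service from the compose file above:)
mongodb:
  image: mongo:3.4
  restart: always
  command: mongod --auth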
The official Docker documentation is very detailed and helps a lot with all of this; keep looking at it: https://docs.docker.com/compose/
I used this tutorial to install Airflow with Docker on my local Mac: http://www.marknagelberg.com/getting-started-with-airflow-using-docker/ and everything worked well. I have the UI and I can connect my dags.
However, when I trigger my task manually it does not run, and I get an error message.
My task in the web UI: [screenshot]
I work on a Mac and I used these commands:
docker pull puckel/docker-airflow
docker run -d -p 8080:8080 -v /path/to/dags:/usr/local/airflow/dags puckel/docker-airflow webserver
Does someone have an idea how I could fix this? Thanks for your help.
Is the airflow scheduler running?
The airflow webserver only shows the DAGs and task status; it is the scheduler that actually runs the tasks.
In the command you showed above, there is no call to airflow scheduler.
So you can run the commands below in another console.
docker ps |grep airflow
Use the above command to get the container ID.
docker exec -it [container ID] airflow scheduler
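Or in one line (a sketch, assuming a single container running the puckel/docker-airflow image):
docker exec -it $(docker ps -q --filter ancestor=puckel/docker-airflow) airflow scheduler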
As the better long-term approach, I suggest using docker-compose.
Instead of plain docker, use docker-compose to manage your whole Docker stack.
Here is the sample code for my puckel/docker-airflow-based Airflow setup:
version: '3'
services:
  postgres:
    image: 'postgres:12'
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
    volumes:
      - ./pg_data:/var/lib/postgresql/data
  webserver:
    image: puckel/docker-airflow:1.10.9
    restart: always
    depends_on:
      - postgres
    environment:
      - LOAD_EX=n
      - EXECUTOR=Local
      - AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgres://airflow:airflow@postgres/airflow
    volumes:
      - ./dags:/usr/local/airflow/dags
    ports:
      - "8080:8080"
    command: webserver
    healthcheck:
      test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
To use it, You can
1- created a project folder. copy above reference code into
docker-compose.yml
2- check if configuration is right by following docker-compose command
docker-compose config
3- enabled the docker-compse project by:
docker-compose up
Note: if you do not want to see detail logs, you can run it in backgroud by:
docker-compose up -d
Now, you can enjoy airflow UI in you browser. by following url
http://<the host ip>:8080
If you like the above answer, please vote it up.
Good luck
WY
I have a docker compose file that links my server to a redis image:
version: '3'
services:
  api:
    build: .
    command: npm run dev
    environment:
      NODE_ENV: development
    volumes:
      - .:/home/node/code
      - /home/node/code/node_modules
      - /home/node/code/build/Release
    ports:
      - "1389:1389"
    depends_on:
      - redis
  redis:
    image: redis:alpine
I am wondering how I could open a redis-cli against the Redis container started by docker-compose, to directly modify key/value pairs. I tried docker attach, but it does not open any shell.
Use docker exec -it your_container_name /bin/bash to get a shell inside the redis container, then run redis-cli to modify key/value pairs.
See https://docs.docker.com/engine/reference/commandline/exec/
Install the Redis CLI on your host, and edit the YAML file to publish Redis's port:
services:
  redis:
    image: redis:alpine
    ports: ["6379:6379"]
Then run docker-compose up to redeploy the container, and you can run redis-cli from the host without needing to interact with Docker directly.
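For example (assuming the default port and no password configured):
redis-cli -h 127.0.0.1 -p 6379 SET mykey hello
redis-cli -h 127.0.0.1 -p 6379 GET mykey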
Using /bin/bash as the command (as suggested in the accepted solution) doesn't work for me with the latest redis:alpine image on Linux.
Instead, this worked:
docker exec -it your_container_name redis-cli