I'm trying to use the latest (13.0) Odoo Docker image for local development, and I'm using the docker-compose.yml from the Docker documentation to spin up the containers:
version: '2'
services:
web:
image: odoo:13.0
depends_on:
- db
ports:
- "8069:8069"
volumes:
- ./config:/etc/odoo
- ./addons/my_module:/mnt/extra-addons
db:
image: postgres:10
environment:
- POSTGRES_DB=postgres
- POSTGRES_PASSWORD=odoo
- POSTGRES_USER=odoo
- PGDATA=/var/lib/postgresql/data/pgdata
My odoo.conf:
[options]
addons_path = /mnt/extra-addons
data_dir = /var/lib/odoo
My file structure:
├── addons
│ └── my_module
│ ├──controllers
│ ├──demo
│ ├──models
│ ├──security
│ ├──views
│ ├──__init__.py
│ └──__manifest__.py
├── config
│ └── odoo.conf
├── docker-compose.yml
└── README.md
my_module is the default module structure from the Odoo website (with the code uncommented), so I'm assuming it has no errors.
When I start the containers with docker-compose up -d, the database and Odoo start without any errors (in Docker and in the browser console), but my_module is not visible inside the application. I turned on developer mode and used Update Apps List in the Apps tab, as suggested in other issues on GitHub and SO, but my_module is still not visible. Additionally, if I log in to the container with docker exec -u root -it odoo /bin/bash, I can cd to /mnt/extra-addons and see the contents of my_module mounted there, so it seems Odoo simply does not recognize it.
I scanned the internet and found many similar problems, but none of the solutions worked for me, so I'm assuming I'm doing something wrong.
After some research I ended up with this docker-compose.yml, which does load custom addons into Odoo. Note that it mounts the whole ./addons directory to /mnt/extra-addons, so each module (e.g. my_module) sits in its own subdirectory of the addons path, rather than having the module's contents mounted directly at /mnt/extra-addons as in the compose file above:
version: '2'
services:
db:
image: postgres:11
environment:
- POSTGRES_PASSWORD=odoo
- POSTGRES_USER=odoo
- POSTGRES_DB=postgres
restart: always
odoo:
image: odoo:13
depends_on:
- db
ports:
- "8069:8069"
tty: true
command: -- --dev=reload
volumes:
- ./addons:/mnt/extra-addons
- ./etc:/etc/odoo
restart: always
odoo.conf:
[options]
addons_path = /mnt/extra-addons
logfile = /etc/odoo/odoo-server.log
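If the module is visible under /mnt/extra-addons but still doesn't show up after Update Apps List, another option is to install or update it explicitly through the image's command pass-through (the same -- mechanism used above). This is only a sketch: the database name mydb is an assumption, so replace it with the database you actually created.
odoo:
  image: odoo:13
  depends_on:
    - db
  # everything after "--" is handed to the odoo binary itself
  command: -- -d mydb -i my_module --dev=reload
  volumes:
    - ./addons:/mnt/extra-addons
    - ./etc:/etc/odoo
Once my_module is installed, drop -i my_module, or switch it to -u my_module to pick up code changes while developing.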
Here I have a network of Docker containers:
Docker-compose.yml:
version: "2"
services:
zookeeper:
image: zookeeper
container_name: zookeeper
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
broker:
image: confluentinc/cp-kafka:latest
container_name: broker
ports:
- '9092:9092'
depends_on:
- zookeeper
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,HOST:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 100
KAFKA_HEAP_OPTS: "-Xmx512M -Xms256M"
kafkacat:
build: kafkacat
container_name: kafkacat
depends_on:
- broker
entrypoint:
- /bin/bash
- -c
- /scripts/get_data.sh
And the following directory structure
├── README.md
├── docker-compose.yml
├── kafka
│ ├── kafkacat
├── kafkacat
│ ├── Dockerfile
│ ├── get_data.sh
│ ├── print_data.sh
│ └── wait_for_it.sh
And kafkacat/Dockerfile:
FROM edenhill/kafkacat:1.6.0
COPY *.sh scripts/
WORKDIR scripts
RUN chmod +x .
RUN apk add --no-cache bash
RUN apk add jq;
RUN apk add curl;
When spinning this up with sudo docker-compose up kafkacat, the kafkacat container returns a Connection refused error:
kafkacat | %3|1667332626.747|FAIL|rdkafka#producer-1| [thrd:broker:29092/bootstrap]: broker:29092/bootstrap: Connect to ipv4#172.18.0.3:29092 failed: Connection refused (after 1ms in state CONNECT)
kafkacat | % ERROR: Local: Broker transport failure: broker:29092/bootstrap: Connect to ipv4#172.18.0.3:29092 failed: Connection refused (after 1ms in state CONNECT)
kafkacat | % ERROR: Local: All broker connections are down: 1/1 brokers are down : terminating
This error does not occur when running docker-compose up kafkacat as a non-superuser.
When I deleted the empty kafka directory and its contents, i.e.:
├── README.md
├── docker-compose.yml
├── kafkacat
│ ├── Dockerfile
│ ├── get_data.sh
│ ├── print_data.sh
│ └── wait_for_it.sh
The error ceased to occur with sudo docker-compose up kafkacat.
I think it's something to do with the mechanics of the Docker build, but I really can't figure it out. Does anyone have a good explanation for why this could occur?
depends_on only waits for the broker container to be started; it does not wait for Kafka inside it to be ready to accept connections.
Kafka takes some time to start, so you should run your wait_for_it.sh script before doing anything against the broker, e.g.:
depends_on:
- broker
entrypoint: ['bash', '-c']
command:
- /scripts/wait_for_it.sh
- broker:29092
- --
- /scripts/get_data.sh
Also, ideally you'd pass the broker address to your script as a shell argument or environment variable, but this should be enough for now.
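For example, a minimal sketch that takes the broker address from an environment variable; BROKER is a hypothetical name, and get_data.sh would have to be adjusted to read it:
kafkacat:
  build: kafkacat
  depends_on:
    - broker
  environment:
    BROKER: "broker:29092"   # hypothetical variable consumed by the scripts
  entrypoint: ['bash', '-c', '/scripts/wait_for_it.sh "$$BROKER" -- /scripts/get_data.sh']
The doubled $$ stops Compose from interpolating the variable itself, so bash inside the container expands $BROKER at run time.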
I am trying to dockerize a microservice-based application. The API is built with NestJS and MySQL. The following is the directory structure:
.
├── docker-compose.yml
├── api
│ ├── src
│ ├── Dockerfile
│ ├── package.json
│ ├── package-lock.json
│ ├── ormconfig.js
│ └── .env
├── payment
│ ├── src
│ ├── Dockerfile
│ ├── package.json
│ └── package-lock.json
├── notifications
│ ├── src
│ ├── Dockerfile
│ ├── package.json
│ └── package-lock.json
The following is the Dockerfile inside the api directory
FROM node:12.22.3
WORKDIR /usr/src/app
COPY package*.json .
RUN npm install
CMD ["npm", "run", "start:dev"]
Below is the docker-compose.yml file. Please note that the details for payment and notifications have not yet been added to the compose file.
version: '3.7'
networks:
server-network:
driver: bridge
services:
api:
image: api
build:
context: .
dockerfile: api/Dockerfile
command: npm run start:dev
volumes:
- ".:/usr/src/app"
- "/usr/src/app/node_modules"
networks:
- server-network
ports:
- '4000:4000'
depends_on:
- mysql
mysql:
image: mysql:5.7
container_name: api_db
restart: always
environment:
MYSQL_DATABASE: api
MYSQL_ROOT_USER: root
MYSQL_PASSWORD: 12345
MYSQL_ROOT_PASSWORD: root
ports:
- "3307:3306"
volumes:
- api_db:/var/lib/mysql
networks:
- server-network
volumes:
api_db:
Now, when I try to start the application using docker-compose up, I get the following error:
no such file or directory, open '/usr/src/app/package.json'
UPDATE
I tried removing the volumes and it didn't help either. I also tried to see what is in the api container by listing the contents of the directory with
docker-compose run api ls /usr/src/app
and it shows the following contents in the folder
node_modules package-lock.json
Any help is much appreciated.
Your build: { context: } directory is set wrong.
The image build mechanism uses a build context to send files to the Docker daemon. The dockerfile: location is relative to this directory; within the Dockerfile, the left-hand side of any COPY (or ADD) directives is always interpreted as relative to this directory (even if it looks like an absolute path; and you can't step out of this directory with ..).
For the setup you show, where you have multiple self-contained applications, the easiest thing is to set context: to the directory containing the application.
build:
context: api
dockerfile: Dockerfile # the default value
Or, if you are using the default value for dockerfile, an equivalent shorthand
build: api
You need to set the build context to a parent directory if you need to share files between images (see How to include files outside of Docker's build context?). In this case, all of the COPY instructions need to be qualified with the subdirectory in the combined source tree.
# Dockerfile, when context: .
COPY api/package*.json ./
RUN npm ci
COPY api/ ./
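The compose side of this variant keeps the parent directory as the context, which is essentially the build: block the question already uses:
api:
  build:
    context: .
    dockerfile: api/Dockerfile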
You should not normally need the volumes: you show. These have the core effect of (1) replacing the application in the image with whatever's on the local system, which could be totally different, and then (2) replacing its node_modules directory with a Docker anonymous volume, which will never be updated to reflect changes in the package.json file. In this particular setup you also need to be very careful that the volume mappings match the filesystem layout. I would recommend removing the volumes: block here; use a local Node for day-to-day development, maybe configuring it to point at the Docker-hosted database.
If you also remove things that are set in the Dockerfile (command:) and things Compose can provide reasonable defaults for (image:, container_name:, networks:) you could reduce the docker-compose.yml file to:
version: '3.8'
services:
api: # without volumes:, networks:, image:, command:
build: api # shorthand corrected directory-only form
ports:
- '4000:4000'
depends_on:
- mysql
mysql: # without container_name:
image: mysql:5.7
restart: always
environment:
MYSQL_DATABASE: api
MYSQL_ROOT_USER: root
MYSQL_PASSWORD: 12345
MYSQL_ROOT_PASSWORD: root
ports:
- "3307:3306"
volumes:
- api_db:/var/lib/mysql
volumes:
api_db:
I have Lubuntu 21.04 on my old PC. Everything is up to date. I installed docker and docker-compose:
sudo apt install docker docker-compose
sudo systemctl enable --now docker
After that, I created a folder web in my home directory with my project. The structure of the ~/web folder is below:
.
├── docker-compose.yml
├── dockerfiles
│ ├── lg4
│ ├── test
│ └── test2
└── www
├── lg4
├── test
└── test2
All services have a restart directive in docker-compose.yml:
version: '3.7'
volumes:
mysql-volume:
networks:
app-shared:
driver: bridge
web_app-shared:
external: true
services:
php-httpd-lg4:
restart: always
build:
args:
user: lg4
uid: 1000
context: ./dockerfiles/lg4/
ports:
- 80:80
volumes:
- "./www/lg4:/var/www/html"
links:
- database
networks:
- app-shared
- web_app-shared
php-httpd-test:
restart: always
build:
args:
user: test
uid: 1000
context: ./dockerfiles/test/
ports:
- 82:80
volumes:
- "./www/test:/var/www/html"
links:
- database
networks:
- app-shared
- web_app-shared
php-httpd-test2:
restart: always
build:
args:
user: test
uid: 1000
context: ./dockerfiles/test2/
ports:
- 81:80
volumes:
- "./www/test2:/var/www/html"
links:
- database
networks:
- app-shared
- web_app-shared
database:
restart: always
image: mysql:5.7
volumes:
- mysql-volume:/var/lib/mysql
ports:
- 3306:3306
networks:
- app-shared
- web_app-shared
environment:
TZ: "Europe/Moskow"
MYSQL_ALLOW_EMPTY_PASSWORD: "no"
MYSQL_ROOT_PASSWORD: "root"
MYSQL_USER: 'admin'
MYSQL_PASSWORD: 'admin'
MYSQL_DATABASE: 'lg4'
phpmyadmin:
restart: always
image: phpmyadmin/phpmyadmin
links:
- 'database:db'
ports:
- 8081:80
environment:
UPLOAD_LIMIT: 300M
networks:
- app-shared
- web_app-shared
Everything works fine when I run sudo docker-compose up -d from the ~/web directory. But how can I start all of this automatically on system startup, without typing any commands in the terminal every time?
Yes, Docker has restart policies, such as docker run --restart=always, that will handle this. The same option is available in the compose file as restart: always.
In order to enable a restart policy, you need to use the --restart argument when executing docker run.
In my case I decided to use the --restart flag with the unless-stopped argument, so that my containers are restarted if they crash and also after a reboot. Here's an example of the command I used:
docker run -dit --restart unless-stopped httpd
If you had an already running container that you wanted to change the restart policy for, you could use the docker update command to change that:
docker update --restart unless-stopped container_id
For more information you could take a look at the official documentation here:
https://docs.docker.com/config/containers/start-containers-automatically/
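In a compose file the policy goes on each service. A minimal sketch for one of the services from the question:
services:
  php-httpd-lg4:
    restart: unless-stopped   # or "always"
Combined with the sudo systemctl enable --now docker from the question, containers that were running when the machine went down are started again at boot.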
I believe the selected answer is not actually correct if you use docker-compose to start your containers: it will start Docker itself, but does nothing for your docker-compose projects (I have several).
What I did was as follows. First, create a very simple start script:
#!/bin/bash
# use an absolute path: the systemd unit below does not start in your home directory
cd /path/to/your/docker-compose-project
# -d so the script exits once the containers are up, which is what Type=forking expects
docker-compose up -d
Do the same for a stop script (the same idea, with docker-compose down instead of docker-compose up -d).
You can save those wherever you want, but I save them alongside the docker-compose.yml.
Now, on modern Ubuntu systems, you create a systemd unit file in /etc/systemd/system. I named mine after the main service, so for me it is MyWebApp, and MyWebApp.service looks like this:
[Unit]
Description=My Web Application
After=network.target docker.service
[Service]
Type=forking
User=MyUserToRun
Group=MyGroupToRun
ExecStart=/path/to/startScript
ExecStop=/path/to/stopScript
[Install]
WantedBy=multi-user.target
Now you can enable your service:
sudo systemctl enable MyWebApp.service
And you can start and stop the service as usual with sudo systemctl start MyWebApp and sudo systemctl stop MyWebApp.
I have a Golang project I am working on, with multiple microservices in the same code repository. My directory structure is roughly as follows:
├── pkg
├── cmd
│ ├── servicea
│ └── serviceb
├── internal
│ ├── servicea
│ └── serviceb
├── Makefile
├── scripts
│ └── protogen.sh
├── vendor
│ └── ...
├── go.mod
├── go.sum
└── readme.md
The main.go files for the respective services are in cmd/servicex/main.go
I've put the individual Dockerfiles for the services in cmd/servicex.
Roughly, this is what my Dockerfile looks like:
FROM golang:1.15.6
ARG version
COPY go.* <repo-path>
COPY pkg/ <repo-path>/pkg/
COPY internal/servicea internal/servicea
COPY vendor/ <repo-path>/vendor/
COPY cmd/servicea/ <repo-path>/cmd/servicea/
WORKDIR <repo-path>/cmd/servicea/
RUN GO111MODULE=on GOFLAGS=-mod=vendor CGO_ENABLED=0 GOOS=linux go build -v -ldflags "-X <repo-path>/cmd/servicea/main.version=$version" -a -installsuffix cgo -o servicea .
FROM alpine:3.12
RUN apk --no-cache add ca-certificates
WORKDIR /servicea/
COPY --from=0 <repo-path>/cmd/servicea .
EXPOSE 50051
ENTRYPOINT ["/servicea/servicea"]
I am using Scylla as my DB for this service and gRPC is the protocol for communication.
This is my docker-compose.yml for this service.
version: '3'
services:
db:
container_name: servicedb
image: scylladb/scylla
hostname: db
environment:
GET_HOST_FROM: dns
SCYLLA_USER: <user>
SCYLLA_PASS: <password>
ports:
- 9042:9042
networks:
- serviceanet
servicea:
container_name: servicea
image: servicea-production:latest
hostname: servicea
build:
context: .
dockerfile: Dockerfile
environment:
GET_HOSTS_FROM: dns
networks:
- serviceanet
volumes:
- .:<repo-path>
ports:
- 50051:50051
depends_on:
- db
links:
- db
labels:
kompose.service.type: LoadBalancer
networks:
serviceanet:
driver: bridge
I am using kompose to generate the corresponding kubernetes yaml files.
However, when I run the compose locally or try to deploy it on minikube/GKE, my service instance is not able to connect to my DB and I get an error like this:
failed to create scylla session, gocql: unable to create session: control: unable to connect to initial hosts: dial tcp 127.0.0.1:9042: connect: connection refused
On the other hand, if I run a local Scylla Docker instance with the following command:
docker run --name some-scylla -p 9042:9042 -d scylladb/scylla --broadcast-address 127.0.0.1 --listen-address 0.0.0.0 --broadcast-rpc-address 127.0.0.1
and then do a go run cmd/servicea/main.go, my service runs and the API endpoints work (verified with Evans).
127.0.0.1 (localhost) is the host/container on which your service itself is running. In the case of multiple containers (either with docker-compose or k8s), each has its own IP address, so 127.0.0.1 refers to a different host depending on where you are connecting from. In your gocql initialization, provide the DB address via a configuration/environment variable. docker-compose automatically makes the db service name resolvable from the other containers, and with k8s you can use its service discovery mechanism.
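As a sketch of wiring that through compose, assuming your gocql setup reads a variable called DB_HOST (a hypothetical name):
servicea:
  environment:
    GET_HOSTS_FROM: dns
    DB_HOST: db        # the compose service name, resolvable on the serviceanet network
  depends_on:
    - db
On Kubernetes the same variable would point at the Scylla Service name instead of db.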
I have two different projects with the same docker configuration (docker-compose.yml), but different files.
├── a
│ ├── docker-compose.yml
│ └── Dockerfile
└── b
├── docker-compose.yml
└── Dockerfile
How is it possible to build containers with the same service names but for different projects? I don't want to think of new names for each project when I only work on one project at a time.
ERROR: for mysql Cannot create container for service mysql: Conflict. The container name "/mysql" is already in use by container "90a84268d483ec2bd5cf0feb7ab1972384941ba255a671c0cfd2b4017ce90682". You have to remove (or rename) that container to be able to reuse that name.
Docker-compose.yml
version: '3'
services:
nginx:
image: nginx:stable-alpine
container_name: nginx
ports:
- "8080:80"
volumes:
- ./src:/var/www/html
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
depends_on:
- php
- mysql
networks:
- laravel
mysql:
image: mysql:8.0.19
container_name: mysql
restart: unless-stopped
tty: true
ports:
- "3306:3306"
environment:
MYSQL_DATABASE: homestead
MYSQL_USER: homestead
MYSQL_PASSWORD: secret
MYSQL_ROOT_PASSWORD: secret
SERVICE_TAGS: dev
SERVICE_NAME: mysql
networks:
- laravel
php:
build:
context: .
dockerfile: Dockerfile
container_name: php
volumes:
- ./src:/var/www/html
ports:
- "9000:9000"
networks:
- laravel
Don't override container_name.
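Without container_name, Compose builds the name from the project name (by default the directory holding the docker-compose.yml) plus the service name and an index, e.g. a_mysql_1 and b_mysql_1 (newer Compose versions use dashes), so the two projects no longer collide. A sketch of the same service with the override removed:
mysql:
  image: mysql:8.0.19
  restart: unless-stopped
  tty: true
  ports:
    - "3306:3306"
  # no container_name here; Compose generates <project>_mysql_1
If you ever need a different prefix, you can also set the project name explicitly with docker-compose -p someproject up.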