I get an error when deploying 2 services in Bluemix using docker-compose:
Creating xxx
ERROR: for xxx-service 'message'
Traceback (most recent call last):
File "bin/docker-compose", line 3, in <module>
File "compose/cli/main.py", line 64, in main
File "compose/cli/main.py", line 116, in perform_command
File "compose/cli/main.py", line 876, in up
File "compose/project.py", line 416, in up
File "compose/parallel.py", line 66, in parallel_execute
KeyError: 'message'
Failed to execute script docker-compose
My docker-compose file (which runs perfectly locally) is:
yyy-service:
  image: yyy
  container_name: wp-docker
  hostname: wp-docker
  ports:
    - 8080:80
  environment:
    WORDPRESS_DB_PASSWORD: whatever
  volumes:
    - "~/whatever/:/var/www/html/wp-content"
  links:
    - xxx-service

xxx-service:
  image: xxx
  container_name: wp-mysql
  hostname: wp-mysql
  environment:
    MYSQL_ROOT_PASSWORD: whatever
    MYSQL_DATABASE: whatever
    MYSQL_USER: root
    MYSQL_PASSWORD: whatever
  volumes:
    - /var/data/whatever:/var/lib/mysql
The question is very similar to this one, but I see no solution, except for trying
export COMPOSE_HTTP_TIMEOUT=300
which hasn't worked for me.
Unfortunately, docker-compose swallows the actual error message returned and gives you an unhelpful stack trace of its own Python script, with no information about the underlying cause.
From your compose file, my guess is that the issue is with your volumes: you've specified mounts of directories on your compute host directly into your containers. That won't work in Bluemix - instead you need to declare the volumes as external (and create them first), then point to them.
For example, something like:
version: '2'
services:
  test:
    image: registry.ng.bluemix.net/ibmliberty
    volumes:
      - test:/tmp/data:rw
volumes:
  test:
    external: true
where you create the volume (in this case, "test") first with something like cf ic volume create test
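For completeness, the end-to-end flow would then be something like this (volume name test as above; assuming the IBM Containers CLI plugin is installed and you're logged in):

# create the named volume first, then deploy the stack
cf ic volume create test
docker-compose up -d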
Related
I'm trying to create an Airflow (1.10.9) pipeline using the puckel Docker image (working with the local docker-compose.yml). Everything works well until I try to import the BigQueryToCloudStorageOperator:
from airflow.contrib.operators.bigquery_to_gcs import BigQueryToCloudStorageOperator
I get this exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/models/dagbag.py", line 243, in process_file m = imp.load_source(mod_name, filepath)
File "/usr/local/lib/python3.7/imp.py", line 171, in load_source module = _load(spec)
File "<frozen importlib._bootstrap>", line 696, in _load
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/usr/local/airflow/dags/coo_dag.py", line 6, in <module> from airflow.contrib.operators.bigquery_to_gcs import BigQueryToCloudStorageOperator
File "/usr/local/lib/python3.7/site-packages/airflow/contrib/operators/bigquery_to_gcs.py", line 20, in <module> from airflow.contrib.hooks.bigquery_hook import BigQueryHook
File "/usr/local/lib/python3.7/site-packages/airflow/contrib/hooks/bigquery_hook.py", line 34, in <module> from airflow.contrib.hooks.gcp_api_base_hook import GoogleCloudBaseHook
File "/usr/local/lib/python3.7/site-packages/airflow/contrib/hooks/gcp_api_base_hook.py", line 25, in <module> import httplib2
ModuleNotFoundError: No module named 'httplib2'
I tried to install the package apache-airflow[gcp]==1.10.9 either manually (by accessing the Airflow webserver container and running pip install) or by mounting a requirements.txt file as a volume, but it doesn't work (when I mount the file as a volume, the webserver container doesn't start because it cannot install the requirements).
Here is the docker-compose.yml that I'm using:
version: '3.7'
services:
  postgres:
    image: postgres:9.6
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
    logging:
      options:
        max-size: 10m
        max-file: "3"
  webserver:
    image: puckel/docker-airflow:1.10.9
    restart: always
    depends_on:
      - postgres
    environment:
      - LOAD_EX=y
      - EXECUTOR=Local
    logging:
      options:
        max-size: 10m
        max-file: "3"
    volumes:
      - ./dags:/usr/local/airflow/dags
      # - ./requirements.txt:/requirements.txt
    ports:
      - "8080:8080"
    command: webserver
    healthcheck:
      test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
      interval: 30s
      timeout: 30s
      retries: 3
And here is the content of requirements.txt:
apache-airflow[gcp]==1.10.9
To mount the requirements.txt file as a volume inside the container, the file has to be in the same directory as the docker-compose.yml file for the relative path to work. Consider correcting the indentation of the mounted volumes in the yml file as shown below.
    volumes:
      - ./dags:/usr/local/airflow/dags
      - ./requirements.txt:/requirements.txt
I have also added some more dependencies to requirements.txt which are required for the BigQueryToCloudStorageOperator() task to work.
Below are the contents of requirements.txt:
pandas==0.25.3
pandas-gbq==0.14.1
apache-airflow[gcp]==1.10.9
In case your previous Airflow instance is already running, consider running sudo docker-compose stop first before you compose again (sudo docker-compose up).
Also, the bigquery_default connection in Airflow should be edited to add the correct GCP project_id and the service account JSON key.
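If you'd rather script that than use the web UI, something along these lines should work with the Airflow 1.10 CLI (the project ID and key path below are placeholders; the key file is assumed to have been mounted into the container):

# replace the default connection with one pointing at your project and key
docker-compose exec webserver airflow connections --delete --conn_id bigquery_default
docker-compose exec webserver airflow connections --add \
  --conn_id bigquery_default \
  --conn_type google_cloud_platform \
  --conn_extra '{"extra__google_cloud_platform__project": "my-gcp-project", "extra__google_cloud_platform__key_path": "/usr/local/airflow/gcp-key.json"}'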
I know that others have asked this question on here before; however, I have gone through their posts and tried the suggestions. I believe it's a complex issue because everyone's files look different and vary based on placements and paths, which I am not yet familiar with in Docker. Now, when I run docker-compose build, the program tells me:
Building server
Traceback (most recent call last):
  File "compose/cli/main.py", line 67, in main
  File "compose/cli/main.py", line 126, in perform_command
  File "compose/cli/main.py", line 302, in build
  File "compose/project.py", line 468, in build
  File "compose/project.py", line 450, in build_service
  File "compose/service.py", line 1147, in build
compose.service.BuildError: (<Service: server>, {'message': 'Cannot locate specified Dockerfile: ./client/Dockerfile'})

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "docker-compose", line 3, in <module>
  File "compose/cli/main.py", line 78, in main
TypeError: can only concatenate str (not "dict") to str
[34923] Failed to execute script docker-compose
I have tried placing the Dockerfile from the client in the same directory as the docker-compose.yml file to eliminate path discrepancies; however, it still says the same thing. Please let me know if you have any suggestions. Thanks!
Here is my docker-compose.yml file:
version: "3.7"
services:
server:
build:
context: ./server
dockerfile: ./client/Dockerfile
image: myapp-server
container_name: myapp-node-server
command: /usr/src/app/node_modules/.bin/nodemon server.js
volumes:
- ./server/:/usr/src/app
- /usr/src/app/node_modules
ports:
- "5050:5050"
depends_on:
- mongo
env_file: ./server/.env
environment:
- NODE_ENV=development
networks:
- app-network
mongo:
image: mongo
volumes:
- data-volume:/data/db
ports:
- "27017:27017"
networks:
- app-network
client:
build:
context: ./client
dockerfile: Dockerfile
image: myapp-client
container_name: myapp-react-client
command: npm start
volumes:
- ./client/:/usr/app
- /usr/app/node_modules
depends_on:
- server
ports:
- "3000:3000"
networks:
- app-network
networks:
app-network:
driver: bridge
volumes:
data-volume:
node_modules:
web-root:
driver: local
Here is the Dockerfile in the client folder:
FROM node:10.16-alpine
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Here is the Dockerfile in the server folder:
FROM node:10.16-alpine
# Create App Directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install Dependencies
COPY package*.json ./
RUN npm install --silent
# Copy app source code
COPY . .
# Exports
EXPOSE 5050
CMD ["npm","start"]
EDIT 1: The issue was an unusual path to the Dockerfiles: client/docker-mern-basic. You can see this in the VSCode file explorer for the client paths. Resolved by making the build context and dockerfile paths consistent with the actual layout, eliminating the extra docker-mern-basic path, as sketched below. See comments below.
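For illustration, one consistent arrangement would be to point the build context at the directory that actually contains the Dockerfile (the docker-mern-basic directory name is taken from the edit above; adjust to your layout):

client:
  build:
    context: ./client/docker-mern-basic
    dockerfile: Dockerfile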
EDIT 0: this doesn't solve the issue, I'll remove this if I can't find any other possible issues.
Your server.build.dockerfile path isn't relative to your context: you've set server as the folder to use as the build "root", so Docker is actually looking for the path ./server/client/Dockerfile.
I think your issue is that you're not giving a path relative to your context; it should be:
services:
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
Just for some background, I am setting up MediaWiki on a Raspberry Pi 3 for a personal learning project.
I have followed the guide from https://peppe8o.com/personal-mediawiki-with-raspberry-pi-and-docker/ and have been able to complete all but the very last step, running 'docker-compose up -d', which gives the error below (I have also pasted the contents of my docker-compose.yml).
I would greatly appreciate it if anyone could spot the issue here, as I have tried a number of things (removing and adding spaces in lines 6 and 17, etc.).
pi@raspberrypi:~/mediawiki $ docker-compose up -d
ERROR: yaml.parser.ParserError: while parsing a block mapping
in "./docker-compose.yml", line 6, column 3
expected <block end>, but found '-'
in "./docker-compose.yml", line 17, column 3
Contents of docker-compose.yml:
# My MediaWiki
# from peppe8o.com

version: '3'
services:
  mediawiki:
    image: mediawiki
    restart: unless-stopped
    ports:
      - 8080:80
    links:
      - database
    volumes:
      - mediawiki-www:/var/www/html
      #After initial setup, download LocalSettings.php to the same directory as
      #this yaml and uncomment the following line and use compose to restart
      #the mediawiki service
  - ./LocalSettings.php:/var/www/html/LocalSettings.php
  database:
    build: .
    restart: unless-stopped
    volumes:
      - mediawiki-db:/var/lib/mysql
volumes:
  mediawiki-www:
  mediawiki-db:
Kind regards
Layerz
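For reference, that parser error points at a '-' in column 3, i.e. a list item indented to the same level as the mediawiki and database keys. Once uncommented, the LocalSettings.php line has to line up with the other entries under volumes, e.g.:

    volumes:
      - mediawiki-www:/var/www/html
      - ./LocalSettings.php:/var/www/html/LocalSettings.php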
Ubuntu 18.04. I am using the Odoo Docker files.
docker-compose:
version: '3.7'
services:
  web:
    build: ./build
    # image: odoo:13.0
    # user: root
    depends_on:
      - mydb
    ports:
      - "18275:8069"
    environment:
      - HOST=mydb
      - USER=us
      - PASSWORD=pw
    restart: always
    volumes:
      - ./odoo:/usr/lib/python3/dist-packages/odoo
      - ./config:/etc/odoo
      - ./extra-addons:/mnt/extra-addons
  mydb:
    image: postgres:12.1
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_PASSWORD=pw
      - POSTGRES_USER=us
    restart: always
In the ./build directory I have the Docker files from the Odoo GitHub repository.
I have problems with the volume ./odoo:/usr/lib/python3/dist-packages/odoo.
My Odoo container keeps restarting with these logs:
web_1 | Traceback (most recent call last):
web_1 | File "/usr/bin/odoo", line 8, in <module>
web_1 | odoo.cli.main()
web_1 | AttributeError: module 'odoo' has no attribute 'cli'
I think it's a permissions issue. I added some permissions and changed the user and group owner, but nothing worked.
What should I do to create this volume?
Without this one volume, everything works great.
Sorry my answer is so late - maybe we can help someone else who has this error.
Consider how simple odoo-bin is:

#!/usr/bin/env python3

# set server timezone in UTC before time module imported
__import__('os').environ['TZ'] = 'UTC'
import odoo

if __name__ == "__main__":
    odoo.cli.main()
This error, "module 'odoo' has no attribute 'cli'", can happen if the Odoo program files are not where odoo-bin expects them to be. The fifth line of odoo-bin is 'import odoo'; if the mounted directory is empty, Python can still import it as a bare namespace package, and odoo.cli then fails with exactly this AttributeError.
And as you have guessed, if your odoo user doesn't have permission to READ the Odoo files, odoo-bin will also throw this error when it cannot import from a folder it cannot even see.
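A quick way to check both possibilities from the host (a sketch; the service name web is taken from the compose file above, and it assumes the image's entrypoint passes unknown commands straight through to exec, as the official Odoo one does):

# is the bind mount actually populated inside the container?
docker-compose run --rm web ls -la /usr/lib/python3/dist-packages/odoo

# can the container's user actually read and import it?
docker-compose run --rm web python3 -c 'import odoo; print(odoo.__file__)'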
I'm trying to upload two containers to Bluemix using docker-compose:
docker-compose -f docker-compose-bluemix.yml up -d
My docker-compose-bluemix.yml file is:
api:
  image: registry.eu-gb.bluemix.net/mycompany/java
  container_name: java-identity-verification-sdk-container
  ports:
    - 8080:8080
  volumes:
    - java-identity-verification-sdk:/data
  links:
    - mongo

mongo:
  image: registry.eu-gb.bluemix.net/mycompany/mongo
  container_name: mongo-identity-verification-sdk-container
  volumes:
    - mongo-identity-verification-sdk:/data/db
  ports:
    - 27017:27017
There are no special characters in docker-compose-bluemix.yml (like tabs).
The images were previously uploaded to Bluemix, and the two volumes java-identity-verification-sdk and mongo-identity-verification-sdk were also created.
I get this error:
Starting ongo-identity-verification-sdk-container
Creating java-identity-verification-sdk-container
ERROR: for api string indices must be integers
Traceback (most recent call last):
File "bin/docker-compose", line 3, in <module>
File "compose/cli/main.py", line 64, in main
File "compose/cli/main.py", line 116, in perform_command
File "compose/cli/main.py", line 876, in up
File "compose/project.py", line 416, in up
File "compose/parallel.py", line 66, in parallel_execute
TypeError: string indices must be integers
Failed to execute script docker-compose
Why?
(By the way, why does it say "Starting ongo-identity-verification-sdk-container"? It should be "mongo", not "ongo".)
The error message is Compose's way of saying "something went wrong".
From looking at the compose file, my guess is that you need to declare the volumes as external, so that Compose uses the ones already there instead of trying to create them. (This presumes you've pre-created the volumes with cf ic volume create - if you haven't, you need to do that first as well.)
e.g. add a stanza like:
volumes:
  java-identity-verification-sdk:
    external: true
  mongo-identity-verification-sdk:
    external: true
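Note that a top-level volumes: key requires the version 2 compose file format; for reference, a minimal sketch of the whole file in that format (service definitions copied from the question) would be:

version: '2'
services:
  api:
    image: registry.eu-gb.bluemix.net/mycompany/java
    container_name: java-identity-verification-sdk-container
    ports:
      - 8080:8080
    volumes:
      - java-identity-verification-sdk:/data
    links:
      - mongo
  mongo:
    image: registry.eu-gb.bluemix.net/mycompany/mongo
    container_name: mongo-identity-verification-sdk-container
    volumes:
      - mongo-identity-verification-sdk:/data/db
    ports:
      - 27017:27017
volumes:
  java-identity-verification-sdk:
    external: true
  mongo-identity-verification-sdk:
    external: true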
As to the missing first letter - that looks like a bug in Compose's output.