bluemix docker-compose String indices must be integers - docker

I'm trying to upload two containers to Bluemix using docker-compose:
docker-compose -f docker-compose-bluemix.yml up -d
My docker-compose-bluemix.yml file is:
api:
  image: registry.eu-gb.bluemix.net/mycompany/java
  container_name: java-identity-verification-sdk-container
  ports:
    - 8080:8080
  volumes:
    - java-identity-verification-sdk:/data
  links:
    - mongo
mongo:
  image: registry.eu-gb.bluemix.net/mycompany/mongo
  container_name: mongo-identity-verification-sdk-container
  volumes:
    - mongo-identity-verification-sdk:/data/db
  ports:
    - 27017:27017
There are no special characters in docker-compose-bluemix.yml (like tabs).
The images were previously uploaded to Bluemix, and the two volumes java-identity-verification-sdk and mongo-identity-verification-sdk were also created.
I get this error:
Starting ongo-identity-verification-sdk-container
Creating java-identity-verification-sdk-container
ERROR: for api string indices must be integers
Traceback (most recent call last):
  File "bin/docker-compose", line 3, in <module>
  File "compose/cli/main.py", line 64, in main
  File "compose/cli/main.py", line 116, in perform_command
  File "compose/cli/main.py", line 876, in up
  File "compose/project.py", line 416, in up
  File "compose/parallel.py", line 66, in parallel_execute
TypeError: string indices must be integers
Failed to execute script docker-compose
Why?
(by the way, why does it say "Starting ongo-identity-verification-sdk-container"? it should be "mongo", not "ongo")

The error message is Compose's way of saying "something went wrong".
From looking at the compose file, my guess is that you need to declare the volumes as external, so that Compose uses the ones already there instead of trying to create them. (This presumes you've pre-created the volumes with cf ic volume create - if you haven't, you need to do that first.)
e.g. add a stanza like:
volumes:
  java-identity-verification-sdk:
    external: true
  mongo-identity-verification-sdk:
    external: true
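If the volumes don't exist yet, create them first with cf ic volume create (using the names from your compose file):
cf ic volume create java-identity-verification-sdk
cf ic volume create mongo-identity-verification-sdk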
As for the missing first letter - that looks like a bug in Compose's output.

Related

How to parameterize ports for docker-compose?

I am trying to parameterize a docker-compose file using .env.
docker-compose.yml
version: '2.3'
networks:
  default: { external: true, name: $NETWORK_NAME }
services:
  rabbitmq_local:
    image: 'rabbitmq:3.6-management-alpine'
    ports:
      # The standard AMQP protocol port
      - ${RABBIT_PORT}:5672
      # HTTP management UI
      - '15672:15672'
.env file
NETWORK_NAME=uv_atp_network
RABBIT_PORT=5672
RABBIT_HTTP_MANAGEMENT_PORT=15672
Parameterizing NETWORK_NAME works, but parameterizing RABBIT_PORT doesn't, with
The Compose file 'docker-compose.yml' is invalid because:
services.rabbitmq_local.ports contains an invalid type, it should be a number, or an object
This makes me suspect RABBIT_PORT is interpreted as a string rather than a number.
How can I parameterize it correctly?
EDIT
I found that forcing the variable to be mandatory
- ${RABBIT_PORT:?unspecified_rabbit_port}:5672
gives the error, meaning it is unset or empty.
What am I doing wrong?
It seems that when running with pytest and pytest-docker-compose, the .env file has to be in the root folder of pytest, along with the pytest.ini file.
Running docker-compose directly from the command line doesn't have that limitation (docker-compose 1.24).
After relocating the file, the variables could be resolved.
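As an aside, giving the variable a default makes the compose file usable even when no .env is picked up - a minimal sketch (the ${VAR:-default} syntax assumes Compose file format 2.1 or later):
ports:
  # falls back to 5672 when RABBIT_PORT is unset or empty
  - "${RABBIT_PORT:-5672}:5672"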

ModuleNotFoundError: No module named 'httplib2' - using BigQueryToCloudStorageOperator in Airflow

I'm trying to create an Airflow (1.10.9) pipeline using the puckel Docker image (I'm working with the local docker-compose.yml). Everything works well until I try to import the BigQueryToCloudStorageOperator:
from airflow.contrib.operators.bigquery_to_gcs import BigQueryToCloudStorageOperator
I get this exception :
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/airflow/models/dagbag.py", line 243, in process_file
    m = imp.load_source(mod_name, filepath)
  File "/usr/local/lib/python3.7/imp.py", line 171, in load_source
    module = _load(spec)
  File "<frozen importlib._bootstrap>", line 696, in _load
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/usr/local/airflow/dags/coo_dag.py", line 6, in <module>
    from airflow.contrib.operators.bigquery_to_gcs import BigQueryToCloudStorageOperator
  File "/usr/local/lib/python3.7/site-packages/airflow/contrib/operators/bigquery_to_gcs.py", line 20, in <module>
    from airflow.contrib.hooks.bigquery_hook import BigQueryHook
  File "/usr/local/lib/python3.7/site-packages/airflow/contrib/hooks/bigquery_hook.py", line 34, in <module>
    from airflow.contrib.hooks.gcp_api_base_hook import GoogleCloudBaseHook
  File "/usr/local/lib/python3.7/site-packages/airflow/contrib/hooks/gcp_api_base_hook.py", line 25, in <module>
    import httplib2
ModuleNotFoundError: No module named 'httplib2'
I tried to install the package apache-airflow[gcp]==1.10.9, either manually (by accessing the airflow webserver machine and running pip install) or by mounting a file (requirements.txt) as a volume, but it doesn't work
(when I mount the file as a volume, the webserver machine doesn't start because it cannot install the requirements).
Here is the docker-compose.yml that I'm using:
version: '3.7'
services:
  postgres:
    image: postgres:9.6
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
    logging:
      options:
        max-size: 10m
        max-file: "3"
  webserver:
    image: puckel/docker-airflow:1.10.9
    restart: always
    depends_on:
      - postgres
    environment:
      - LOAD_EX=y
      - EXECUTOR=Local
    logging:
      options:
        max-size: 10m
        max-file: "3"
    volumes:
      - ./dags:/usr/local/airflow/dags
      # - ./requirements.txt:/requirements.txt
    ports:
      - "8080:8080"
    command: webserver
    healthcheck:
      test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
      interval: 30s
      timeout: 30s
      retries: 3
And here is the content of the file requirements.txt:
apache-airflow[gcp]==1.10.9
To mount the requirements.txt file as a volume inside the container, the file has to be in the same directory as the docker-compose.yml file for the relative path to work. Consider correcting the indentation of the mounted volumes in the yml file as shown below.
volumes:
  - ./dags:/usr/local/airflow/dags
  - ./requirements.txt:/requirements.txt
I have also added some more dependencies to requirements.txt which are required for the BigQueryToCloudStorageOperator() task to work.
Below is the content of requirements.txt:
pandas==0.25.3
pandas-gbq==0.14.1
apache-airflow[gcp]==1.10.9
In case your previous Airflow instance is already running, consider running sudo docker-compose stop first before you compose again (sudo docker-compose up).
Also, the bigquery_default connection in Airflow should be edited to add the correct GCP project_id and service account json key.
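To check that the install actually took effect inside the running container, something like this should work (webserver being the service name from the compose file above):
docker-compose exec webserver pip show httplib2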

Docker parsing error whilst setting up MediaWiki on RaspberryPi3

Just for some background, I am setting up mediawiki on a raspberry pi3 for a personal learning project.
I have followed the guide at https://peppe8o.com/personal-mediawiki-with-raspberry-pi-and-docker/ and was able to follow all but the very last step, running 'docker-compose up -d', which gives the error below (I have also pasted the contents of my docker-compose.yml).
I would greatly appreciate it if anyone could spot the issue here, as I have tried a number of things
(removing and adding spaces in lines 6 & 17, etc.).
pi@raspberrypi:~/mediawiki $ docker-compose up -d
ERROR: yaml.parser.ParserError: while parsing a block mapping
in "./docker-compose.yml", line 6, column 3
expected <block end>, but found '-'
in "./docker-compose.yml", line 17, column 3
Contents of docker-compose.yml:
# My MediaWiki
# from peppe8o.com
version: '3'
services:
 mediawiki:
  image: mediawiki
  restart: unless-stopped
  ports:
   - 8080:80
  links:
   - database
  volumes:
   - mediawiki-www:/var/www/html
  #After initial setup, download LocalSettings.php to the same directory as
  #this yaml and uncomment the following line and use compose to restart
  #the mediawiki service
  - ./LocalSettings.php:/var/www/html/LocalSettings.php
 database:
  build: .
  restart: unless-stopped
  volumes:
   - mediawiki-db:/var/lib/mysql
volumes:
 mediawiki-www:
 mediawiki-db:
Kind regards
Layerz
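For anyone hitting the same thing: "expected <block end>, but found '-'" usually means a list item starts at the same column as the mapping keys above it, so the parser sees a stray '-' where it expects another key. Assuming the indentation above matches the original file, the uncommented LocalSettings.php entry needs to line up with the other item in the volumes list - a minimal sketch:
  volumes:
   - mediawiki-www:/var/www/html
   - ./LocalSettings.php:/var/www/html/LocalSettings.php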

Volume odoo. Permission issue

Ubuntu 18.04. I am using the odoo docker files.
docker-compose:
version: '3.7'
services:
  web:
    build: ./build
    # image: odoo:13.0
    # user: root
    depends_on:
      - mydb
    ports:
      - "18275:8069"
    environment:
      - HOST=mydb
      - USER=us
      - PASSWORD=pw
    restart: always
    volumes:
      - ./odoo:/usr/lib/python3/dist-packages/odoo
      - ./config:/etc/odoo
      - ./extra-addons:/mnt/extra-addons
  mydb:
    image: postgres:12.1
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_PASSWORD=pw
      - POSTGRES_USER=us
    restart: always
In the ./build directory I have the docker files from the odoo GitHub repository.
I have a problem with the volume ./odoo:/usr/lib/python3/dist-packages/odoo.
My odoo container keeps restarting with these logs:
web_1 | Traceback (most recent call last):
web_1 | File "/usr/bin/odoo", line 8, in <module>
web_1 | odoo.cli.main()
web_1 | AttributeError: module 'odoo' has no attribute 'cli'
I think it's a permission issue. I added some permissions and changed the user and group owner, but nothing worked...
What should I do to get this volume working?
Without this one volume everything works great.
Sorry my answer is so late - maybe we can help someone else who has this error.
Consider how simple Odoo-bin is:
#!/usr/bin/env python3
# set server timezone in UTC before time module imported
__import__('os').environ['TZ'] = 'UTC'
import odoo

if __name__ == "__main__":
    odoo.cli.main()
This error - "odoo has no attribute 'cli'" - can happen if the odoo program files are not where odoo-bin expects them to be. One of the first lines in odoo-bin is 'import odoo', and if the package isn't there, you will get this error.
And as you have guessed, if your odoo user doesn't have permission to READ the odoo files, odoo-bin will also throw this error, because it cannot import from a folder it cannot even see.
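A quick way to test that theory from the host - a sketch, assuming web is the service name from the compose file above and that the container runs as an unprivileged odoo user:
# check that the bind-mounted source tree exists and is readable on the host
ls -l ./odoo
# grant world read (and directory traverse) permissions so the container's user can import it
sudo chmod -R a+rX ./odoo
# confirm the container can actually see the mounted package
docker-compose exec web ls /usr/lib/python3/dist-packages/odoo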

Docker Compose in Bluemix KeyError Message

I get an error when deploying 2 services in Bluemix using docker-compose:
Creating xxx
ERROR: for xxx-service 'message'
Traceback (most recent call last):
  File "bin/docker-compose", line 3, in <module>
  File "compose/cli/main.py", line 64, in main
  File "compose/cli/main.py", line 116, in perform_command
  File "compose/cli/main.py", line 876, in up
  File "compose/project.py", line 416, in up
  File "compose/parallel.py", line 66, in parallel_execute
KeyError: 'message'
Failed to execute script docker-compose
My docker-compose file (which runs perfectly locally) is:
yyy-service:
  image: yyy
  container_name: wp-docker
  hostname: wp-docker
  ports:
    - 8080:80
  environment:
    WORDPRESS_DB_PASSWORD: whatever
  volumes:
    - "~/whatever/:/var/www/html/wp-content"
  links:
    - xxx-service
xxx-service:
  image: xxx
  container_name: wp-mysql
  hostname: wp-mysql
  environment:
    MYSQL_ROOT_PASSWORD: whatever
    MYSQL_DATABASE: whatever
    MYSQL_USER: root
    MYSQL_PASSWORD: whatever
  volumes:
    - /var/data/whatever:/var/lib/mysql
The question is very similar to this one, but I see no solution, except for trying
export COMPOSE_HTTP_TIMEOUT=300
which hasn't worked for me.
Unfortunately, docker-compose eats the actual error messages returned and gives you a helpful stack trace of their python script with no info about the underlying cause.
From your compose file, my guess is that the issue is with your volumes. You've specced it to mount directories on your compute host directly into your containers. That won't work in Bluemix - instead you need to specify that the volumes are external (and create those first), then point to them.
For example, something like:
version: '2'
services:
  test:
    image: registry.ng.bluemix.net/ibmliberty
    volumes:
      - test:/tmp/data:rw
volumes:
  test:
    external: true
where you create the volume (in this case, "test") first with something like cf ic volume create test
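Applied to your compose file, that would mean creating one volume per service first and then mounting those by name instead of the host paths, e.g. (the volume names here are just examples):
cf ic volume create wp-content
cf ic volume create wp-mysql-data
and then in each service something like:
volumes:
  - wp-content:/var/www/html/wp-content
plus a top-level volumes: section declaring both names with external: true, as in the example above.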
