When attempting to connect to Redshift from the latest liquibase/liquibase Docker image v4.17.0, Liquibase returns an error:
Unexpected error running Liquibase: Driver class was not specified and could not be determined from the url (jdbc:redshift://aaaa.aaaa.eu-west-2.redshift.amazonaws.com:dddd:/aaaa).
This does not occur with the command-line version of Liquibase v4.17.0: the Redshift driver class is detected and the connection works.
When the driver JAR is stored inside the container and the driver class is specified explicitly, Liquibase reports that it cannot find the database driver, regardless of whether the class is given as com.amazon.redshift.Driver (as declared in the driver JAR) or com.amazon.redshift.jdbc42.Driver (as given in the AWS docs):
Unexpected error running Liquibase: Cannot find database driver: com.amazon.redshift.Driver
Unexpected error running Liquibase: Cannot find database driver: com.amazon.redshift.jdbc42.Driver
Dockerfile:
FROM liquibase/liquibase:latest
COPY entry.sh /entry.sh
ADD https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.9/redshift-jdbc42-2.1.0.9.jar lib/redshift-jdbc42-2.1.0.9.jar
ADD https://github.com/liquibase/liquibase-redshift/releases/download/v4.17.0/liquibase-redshift-4.17.0.jar lib/liquibase-redshift-4.17.0.jar
COPY liquibase.properties liquibase.properties
ENTRYPOINT ["/entry.sh"]
Command executed on the container (excluding credentials, url, etc.):
docker-entrypoint.sh --defaultsFile=liquibase.properties --classpath=lib/redshift-jdbc42-2.1.0.9.jar
Is there a way to connect from a Liquibase Docker container to Redshift?
ADD in the Dockerfile was extracting the .jar files. Switching to RUN wget fixed the issue.
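For reference, this is the Dockerfile above with the two ADD instructions replaced by RUN wget (this assumes wget is available in the base image):
FROM liquibase/liquibase:latest
COPY entry.sh /entry.sh
# Download the JARs with wget so they are stored as-is instead of being unpacked by ADD
RUN wget -q -O lib/redshift-jdbc42-2.1.0.9.jar https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.9/redshift-jdbc42-2.1.0.9.jar \
 && wget -q -O lib/liquibase-redshift-4.17.0.jar https://github.com/liquibase/liquibase-redshift/releases/download/v4.17.0/liquibase-redshift-4.17.0.jar
COPY liquibase.properties liquibase.properties
ENTRYPOINT ["/entry.sh"]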
I followed the instructions to initialize breeze environment: https://github.com/apache/airflow/blob/main/CONTRIBUTORS_QUICK_START.rst#setting-up-breeze
It seems the image is built but fails to start. Is something wrong with my environment?
Good version of docker 20.10.9.
Python version: 3.8
Backend: mysql
No need to rebuild the image: none of the important files changed
Use CI image.
Branch name: main
Docker image: ghcr.io/apache/airflow/main/ci/python3.8
Airflow source version: 2.3.0.dev0
Python version: 3.8
Backend: mysql 5.7
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/mnt/c/Users/binglilun/source/repos/doowhtron/airflow/scripts/in_container/entrypoint_ci.sh" to rootfs at "/entrypoint" caused: mount through procfd: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
The problem is fixed.
Key things for running Breeze in my WSL environment were:
Upgrade to WSL2 (my distribution is Ubuntu)
Check out the source to ~/ instead of /mnt/c/
Run the Docker daemon inside WSL2 (not using Docker Desktop)
Enable WSL's networking by putting nameserver 8.8.8.8 in /etc/resolv.conf
Add the "--network host" parameter to docker_v build (_build_images.sh), otherwise the build cannot reach the internet
Install yarn and add "--ignore-engines" to yarn install (compile_assets.sh)
And breeze works for me now.
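For reference, a rough sketch of the key commands; the file names and flags are from my setup and may differ in yours:
# run inside the WSL2 distribution, with the sources checked out under ~/ rather than /mnt/c/
echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf    # give WSL a working DNS resolver
# in _build_images.sh: add --network host to the docker_v build invocation
# in compile_assets.sh: change "yarn install" to "yarn install --ignore-engines"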
I just reinstalled Fabric Samples v2.2.0 from the Hyperledger Fabric repository according to the documentation.
But when I try to run the asset-transfer-basic application located in the fabric-samples/asset-transfer-basic/application-javascript directory by running node app.js, the wallet is created and an admin and a user are registered. But then it tries to invoke the function as given in app.js and shows this error:
error: [Transaction]: Error: No valid responses from any peers. Errors:
peer=peer0.org1.example.com:7051, status=500, message=error in simulation: failed to execute transaction
aa705c10403cb65cecbd360c13337d03aac97a8f233a466975773586fe1086f6: could not launch chaincode basic_1.0:b359a077730d7
f44d6a437ad49d1da951f6a01c6d1eed4f85b8b1f5a08617fe7: error starting container: error starting container:
API error (404): network _test not found
Response of a transaction to invoke a function
This error never occurred before. But somehow, after reinstalling Docker and the Hyperledger Fabric fabric-samples, it never seems to find the network _test.
N.B.: Before reinstalling, the name of the network was net_test. But now, when I run docker network ls, it shows a network called docker_test. I am using Windows Subsystem for Linux (WSL) version 1.
NETWORK ID     NAME          DRIVER    SCOPE
b7ac05456f46   bridge        bridge    local
acaa5856b871   docker_test   bridge    local
866f58b9078d   host          host      local
4812f94efb15   none          null      local
How can I fix the issue occurring when I try to run the application?
In my opinion, the CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE setting seems to be wrong.
You can check docker-compose.yaml or core.yaml.
1. docker-compose.yaml
I will explain using fabric-samples/test-network as the target, based on your current situation.
You can check the CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE value in docker-compose.yaml.
Perhaps in your case (fabric-samples/test-network), the value of ${COMPOSE_PROJECT_NAME} was not set properly, so the configured network mode resolved to just _test.
Make sure the value is set correctly, and change it to your network name.
# hyperledger/fabric-samples/test-network/docker/docker-compose-test-net.yaml
# based v2.2
...
  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    image: hyperledger/fabric-peer:2.2
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_test
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=docker_test
...
2. core.yaml
If you have not set the value in the peer's docker-compose.yaml entry, you need to check the core.yaml referenced by the peer.
You can find the NetworkMode parameter in core.yaml:
# core.yaml
...
vm:
  docker:
    hostConfig:
      # NetworkMode: host
      NetworkMode: docker_test
...
If neither is set, it will fall back to the default value. However, since you see _test being logged, the wrong value has been set in one of the two places, and you need to correct it to the value you intended.
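To check which value the peer actually picked up, you can compare the configured network mode with the networks Docker knows about; the compose file path below assumes the standard fabric-samples/test-network layout:
docker network ls                                                       # the networks that actually exist (e.g. docker_test)
grep -rn CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE docker/docker-compose-test-net.yaml
docker exec peer0.org1.example.com env | grep NETWORKMODE               # what the running peer was started with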
This issue is related to Docker networking, and complements #nezuko-response.
Create a file named ".env" in the same directory where your docker-compose file exists.
Add the following line to it:
COMPOSE_PROJECT_NAME=net
Use docker-compose up to update the containers with the new configuration.
Or bring the HL network down (./network.sh down) and up (./network.sh up), restarting the test-network.
Otherwise you'll still get the same error even after creating the ".env" file.
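A minimal sketch of the steps above, assuming you are in fabric-samples/test-network where the compose files live:
echo "COMPOSE_PROJECT_NAME=net" > .env   # docker-compose picks this file up automatically
./network.sh down                        # bring the test network down
./network.sh up                          # bring it back up under the new project name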
More explanation about docker networking
run ./network down
then
export COMPOSE_PROJECT_NAME=net
afterwards
./network start
I copied this from someone else. This one worked for me!
Please create a file named ".env" in the same directory where your docker-compose file exists, and add the following line to the ".env" file:
COMPOSE_PROJECT_NAME=net
This worked for me
export COMPOSE_PROJECT_NAME=net
I'm using gradle-6.5, and when I build my app on my laptop everything builds fine, but when I run the same command in Docker, some tests fail or something goes wrong.
I have an exception like the following:
Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':test'.
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.lambda$executeIfValid$1(ExecuteActionsTaskExecuter.java:207)
at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:263)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:205)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:186)
...
Caused by: org.gradle.api.GradleException: There were failing tests. See the report at: file:///tmp/reports/tests/test/index.html
at org.gradle.api.tasks.testing.AbstractTestTask.handleTestFailures(AbstractTestTask.java:628)
at org.gradle.api.tasks.testing.AbstractTestTask.executeTests(AbstractTestTask.java:499)
at org.gradle.api.tasks.testing.Test.executeTests(Test.java:646)
I want to know if there is a way to write the text placed in that index.html file to the console, or maybe copy the file to my laptop.
To build my app in Docker, I use the following command:
docker build -t myapp .
You want to get the build/reports/tests/test/ directory, which contains the test reports (e.g., index.html), onto your local machine. One way is to use a docker-compose.yml to mirror the relevant directory:
version: '3.8'
services:
  chat:
    build:
      dockerfile: Dockerfile
      context: .
    command: gradle run
    working_dir: /home/gradle/project
    volumes:
      - type: bind
        source: ./build/reports/tests/test
        target: /home/gradle/project/build/reports/tests/test
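With a file like the one above, running the service should leave the reports in ./build/reports/tests/test on the host (the service name chat and the paths are the ones assumed in the compose file, and you may need a command that actually runs the tests):
docker-compose up --build chat
ls ./build/reports/tests/test/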
There is a simpler way of getting the logs without an extra docker-compose file.
First, you should find the id of the stopped container. Type docker ps -a in your terminal to get the list of all containers including the ones that have the status "exited". Find the one that you are interested in and copy the container id.
Second, copy the files from the container to your host. Type docker cp {copied container id}:home/gradle/src/build/reports/tests/test/ ./{location where you want to save your logs on your machine}.
Third, go to the location you specified, open the index.html file, and enjoy the full log output.
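Putting those steps together, the commands look roughly like this (the container id, the in-container path, and the destination directory are placeholders to adjust for your image):
docker ps -a                             # find the id of the exited container
docker cp <container-id>:home/gradle/src/build/reports/tests/test/ ./test-reports
# then open ./test-reports/index.html in a browser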
I've been running a dev-setup for a while without issue. I'm using Docker for Windows with Windows Subsystem for Linux 2. It's been working very well. Today when trying to spin up docker-compose, it failed with the following error:
frederik#desktop:~/projects/caselab$ docker-compose -f docker-test.yml up
Recreating f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_caselab_db_1 ...
Recreating f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_caselab_db_1 ... error
ERROR: for f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_caselab_db_1 Cannot create container for service db: mkdir 07ff2055c618dedc240ca3275de3f8c41d091136dc659cf463ee9fc62eed1853: permission denied
ERROR: for db Cannot create container for service db: mkdir 07ff2055c618dedc240ca3275de3f8c41d091136dc659cf463ee9fc62eed1853: permission denied
ERROR: Encountered errors while bringing up the project.
frederik#desktop:~/projects/caselab$
I shaved the contents of docker-test.yml down to simply:
version: '3'
services:
  db:
    image: postgres
    logging:
      driver: none
I tried running docker run postgres which worked without issue. I then tried copying all the contents of my folder to another folder. Now, running docker-compose -f docker-test.yml works without issues.
I think it's somehow related to permissions, though I can see no difference in permissions between the original folder and the new one.
As I do most of my editing in Visual Studio Code running on Windows, I'm thinking it may be related to the Windows/Linux boundary, though I'm not completely sure how. And, again, this setup has been running for months without issue, so I'm at a loss as to what I could have changed.
Any ideas?
I managed to solve it.
I noticed that running docker-compose up prepended a hash to the container name every single time the command was run. This resulted in the comically long names above.
Running docker-compose images showed this image still being present.
Simply running docker-compose rm removed the stale container, which allowed the right container to be created and run.
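For reference, the sequence looked roughly like this, using the same compose file as above:
docker-compose -f docker-test.yml images   # shows the container with the absurdly long name
docker-compose -f docker-test.yml rm       # removes the stale, stopped container
docker-compose -f docker-test.yml up       # recreates and starts it cleanly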
I have filed this as a bug in docker-compose.
I am attempting to add a volume to a Docker container that will be built and run in a Docker Compose system on a hosted build service (CircleCI). It works fine locally, but not remotely. CircleCI provide an SSH facility I can use to debug why a container is not behaving as expected.
The relevant portion of the Docker Compose file is thus:
missive-mongo:
  image: missive-mongo
  command: mongod -v --logpath /var/log/mongodb/mongodb.log --logappend
  volumes:
    - ${MONGO_LOCAL}:/data/db
    - ${LOGS_LOCAL_PATH}/mongo:/var/log/mongodb
  networks:
    - storage_network
Locally, if I do docker inspect integration_missive-mongo_1 (i.e. the running container name), I will get the volumes as expected:
...
"HostConfig": {
"Binds": [
"/tmp/missive-volumes/logs/mongo:/var/log/mongodb:rw",
"/tmp/missive-volumes/mongo:/data/db:rw"
],
...
On the same container, I can shell in and see that the volume works fine:
docker exec -it integration_missive-mongo_1 sh
/ # tail /var/log/mongodb/mongodb.log
2017-11-28T22:50:14.452+0000 D STORAGE [initandlisten] admin.system.version: clearing plan cache - collection info cache reset
2017-11-28T22:50:14.452+0000 I INDEX [initandlisten] build index on: admin.system.version properties: { v: 2, key: { version: 1 }, name: "incompatible_with_version_32", ns: "admin.system.version" }
2017-11-28T22:50:14.452+0000 I INDEX [initandlisten] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2017-11-28T22:50:14.452+0000 D INDEX [initandlisten] bulk commit starting for index: incompatible_with_version_32
2017-11-28T22:50:14.452+0000 D INDEX [initandlisten] done building bottom layer, going to commit
2017-11-28T22:50:14.454+0000 I INDEX [initandlisten] build index done. scanned 0 total records. 0 secs
2017-11-28T22:50:14.455+0000 I COMMAND [initandlisten] setting featureCompatibilityVersion to 3.4
2017-11-28T22:50:14.455+0000 I NETWORK [thread1] waiting for connections on port 27017
2017-11-28T22:50:14.455+0000 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner
2017-11-28T22:50:14.455+0000 D COMMAND [ClientCursorMonitor] BackgroundJob starting: ClientCursorMonitor
OK, now for the remote. I kick off a build, it fails because Mongo won't start, so I use the SSH facility that keeps a box alive after a failed build.
I first hack the Docker Compose file so that it does not try to launch Mongo, as that will fail; I just get it to sleep instead:
missive-mongo:
  image: missive-mongo
  command: sleep 1000
  volumes:
    - ${MONGO_LOCAL}:/data/db
    - ${LOGS_LOCAL_PATH}/mongo:/var/log/mongodb
  networks:
    - storage_network
I then run the docker-compose up script to bring all containers up, and then examine the problematic box: docker inspect integration_missive-mongo_1:
"HostConfig": {
"Binds": [
"/tmp/missive-volumes/logs/mongo:/var/log/mongodb:rw",
"/tmp/missive-volumes/mongo:/data/db:rw"
],
That looks fine. So on the host I create a dummy log file, and list it to prove it is there:
bash-4.3# ls /tmp/missive-volumes/logs/mongo
mongodb.log
So I try shelling in, docker exec -it integration_missive-mongo_1 sh again. This time I find that the folder exists, but not the volume contents:
/ # ls /var/log
mongodb
/ # ls /var/log/mongodb/
/ #
This is very odd, because the reliability of volumes in the remote Docker/Compose config has been exemplary up until now.
Theories
My main one at present is that the differing versions of Docker and Docker Compose could have something to do with it. So I will list out what I have:
Local
Host: Linux Mint
Docker version 1.13.1, build 092cba3
docker-compose version 1.8.0, build unknown
Remote
Host: I suspect it is Alpine (it uses apk for installing)
I am using the docker:17.05.0-ce-git image supplied by CircleCI, the version shows as Docker version 17.05.0-ce, build 89658be
Docker Compose is installed via pip, and getting the version produces docker-compose version 1.13.0, build 1719ceb.
So, there is some version discrepancy. As a shot in the dark, I could try bumping up Docker/Compose, though I am wary of breaking other things.
What would be ideal though, is some sort of advanced Docker commands I can use to debug why the volume appears to be registered but is not exposed inside the container. Any ideas?
CircleCI runs docker-compose remotely from the Docker daemon, so local bind mounts don't work.
A named volume will default to the local driver and would work in CircleCI's Compose setup; the volume will exist wherever the container runs.
Logging should generally be left to stdout and stderr in a single-process-per-container setup. Then you can make use of a logging driver plugin to ship logs to a central collector. MongoDB defaults to logging to stdout/stderr when run in the foreground.
Combining the volumes and logging:
version: "2.1"
services:
syslog:
image: deployable/rsyslog
ports:
- '1514:1514/udp'
- '1514:1514/tcp'
mongo:
image: mongo
command: mongod -v
volumes:
- 'mongo_data:/data/db'
depends_on:
- syslog
logging:
options:
tag: '{{.FullID}} {{.Name}}'
syslog-address: "tcp://10.8.8.8:1514"
driver: syslog
volumes:
mongo_data:
This is a little bit of a hack, as the logging endpoint would normally be external rather than a container in the same group. That is why the logging config uses the external address and port mapping to access the syslog server: the connection is between the Docker daemon and the log server, rather than container to container.
I wanted to add an additional answer to accompany the accepted one. My use-case on CircleCI is to run browser-based integration tests, in order to check that a whole stack is working correctly. A number of the 11 containers in use have volumes defined for various things, such as log output and raw database file storage.
What I had not realised until now was that the volumes in CircleCI's Docker executor do not work, as a result of a technical Docker limitation. As a result of this failure, in each case previously, the files were just written to an empty folder.
In my new case however, this issue was causing Mongo to fail. The reason for that was that I'm using --logappend to prevent Mongo from doing its own log rotation on start-up, and this switch requires the path specified in --logpath to exist. Since it existed on the host, but the volume creation failed, the container could not see the log file.
To fix this, I have modified my Mongo service entry to call a script in the command section:
missive-mongo:
  image: missive-mongo
  command: sh /root/mongo-logging.sh
And the script looks like this:
#!/bin/sh
#
# The command sets up logging in Mongo. The touch is for the benefit of any
# environment in which the logs do not already exist (e.g. Integration, since
# CircleCI does not support volumes)
touch /var/log/mongodb/mongodb.log \
&& mongod -v --logpath /var/log/mongodb/mongodb.log --logappend
In the two possible use cases, this will act as follows:
In the case of the mount working (dev, live) it will simply touch a file if it exists, and create it if it does not (e.g. a completely new environment),
In the case of the mount not working (CircleCI) it will create the file.
Either way, this is a nice safety feature to prevent Mongo blowing up.