I have started to receive an "Unable to create merged directory" error when executing tests in a Docker container. The command used to run the tests:
pub run build_runner test --fail-on-severe --delete-conflicting-outputs -- -p chrome
Output:
[SEVERE] Unable to create merged directory for /tmp/build_runner_testFCULZJ/.
[SEVERE] Failed after 1m 14s
There were two changes:
the Docker image with Chrome was built from 2.7 instead of 2.5
the build_web_compilers dev dependency version was bumped: ^1.0.0 -> ^2.9.0
I am not sure what the cause is - whether the Docker image does not allow the directory to be created, or whether it is a build_runner issue.
The reason was that the test sources were not included (the test folder was excluded from sources in build.yaml):
targets:
  $default:
    sources:
      - lib/**
      - web/**
      - pubspec.*
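Adding the test folder back to the target's sources fixes the error. A minimal sketch of the corrected build.yaml, assuming only the default target is used:

targets:
  $default:
    sources:
      - lib/**
      - web/**
      - test/**
      - pubspec.*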
In my pipelines yml file, I specify a custom image to use from my AWS ECR repository. When the pipeline runs, the "Build setup" log suggests that the image was pulled and used without issue:
Images used:
build : 123456789.dkr.ecr.ca-central-1.amazonaws.com/my-image@sha256:346c49ea675d8a0469ae1ddb0b21155ce35538855e07a4541a0de0d286fe4e80
I had worked through some issues locally relating to having my Cypress E2E test suite run properly in the container. Having fixed those issues, I expected everything to run the same in the pipeline. However, looking at the pipeline logs it seems that it was being run with an image other than the one I specified (I suspect it's using the Atlassian default image). Here is the source of my suspicion:
STDERR: /opt/atlassian/pipelines/agent/build/packages/server/node_modules/.cache/mongodb-memory-server/mongodb-binaries/4.0.14/mongod: /usr/lib/x86_64-linux-gnu/libcurl.so.4: version `CURL_OPENSSL_3' not found (required by /opt/atlassian/pipelines/agent/build/packages/server/node_modules/.cache/mongodb-memory-server/mongodb-binaries/4.0.14/mongod)
I know the working directory of the default Atlassian image is "/opt/atlassian/pipelines/agent/build/". Is there a reason that this image would be used and not the one I specified? Here is my pipelines config:
image:
  name: 123456789.dkr.ecr.ca-central-1.amazonaws.com/my-image:1.4
  aws:
    access-key: $AWS_ACCESS_KEY_ID
    secret-key: $AWS_SECRET_ACCESS_KEY

cypress-e2e: &cypress-e2e
  name: "Cypress E2E tests"
  caches:
    - cypress
    - nodecustom
    - yarn
  script:
    - yarn pull-dev-secrets
    - yarn install
    - $(npm bin)/cypress verify || $(npm bin)/cypress install && $(npm bin)/cypress verify
    - yarn build:e2e
    - MONGOMS_DEBUG=1 yarn start:e2e && yarn workspace e2e e2e:run
  artifacts:
    - packages/e2e/cypress/screenshots/**
    - packages/e2e/cypress/videos/**

pipelines:
  custom:
    cypress-e2e:
      - step:
          <<: *cypress-e2e
For anyone who happens to stumble across this, I suspect that the repository is mounted into the pipeline container at "/opt/atlassian/pipelines/agent/build" rather than the working directory specified in the image. I ran a "pwd", which gave "/opt/atlassian/pipelines/agent/build", but I also ran a "cat /etc/os-release", which led me to the conclusion that it was in fact running the image I specified. I'm still not entirely sure why I was getting that error in the pipeline even though I had tested everything locally in the exact same container.
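For anyone with the same doubt, two throwaway lines at the start of the step's script are enough to check this:

pwd                  # shows the directory Pipelines clones the repository into
cat /etc/os-release  # shows the base distribution, i.e. which image is actually running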
For posterity: I was using an in-memory mongo database from this project "https://github.com/nodkz/mongodb-memory-server". It works by automatically downloading a mongod executable into your node_modules and using it to spin up a mongo instance. I was running into a similar error locally, which I fixed by upgrading my base image from a Debian 9 to a Debian 10 based image. Again, I'm still not sure why it didn't behave the same in the pipeline; I suppose there are some peculiarities in how containers are run in Pipelines that I'm unaware of. Ultimately my solution was installing mongod into the image itself and forcing mongodb-memory-server to use that executable rather than the one in node_modules.
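A rough sketch of that last step, assuming a Debian-based image with MongoDB's apt repository already configured and relying on mongodb-memory-server's MONGOMS_SYSTEM_BINARY variable (the package name and path below are illustrative):

# Dockerfile: bake mongod into the image
RUN apt-get update && apt-get install -y mongodb-org-server

# Tell mongodb-memory-server to use the system binary instead of downloading one
ENV MONGOMS_SYSTEM_BINARY=/usr/bin/mongod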
I have made an application using JHipster, and I want to create a Docker image of it using Gradle. I was following this guide, but when I run the command ./gradlew bootWar -Pprod buildDocker, it gives an error:
Task 'bootWar' not found in root project 'seodin'. Some candidates
are: 'bootRun'.
I also tried running the command with 'bootRun', but I get the following error in this case:
Execution failed for task ':bootRun'.
Process 'command '/usr/lib/jvm/java-8-oracle/bin/java'' finished with non-zero exit value 1
I am stuck at this point and any help is appreciated. [Note: Java and all other dependencies are installed and the JHipster app is working fine on localhost]
I am running a TensorFlow Serving container following this; all the previous steps went fine, but in the last block I ran into some problems:
git clone --recurse-submodules https://github.com/tensorflow/serving
cd serving/
bazel build -c opt tensorflow_serving/...
root@15bb1c2766e3:/serving# bazel build -c opt tensorflow_serving/...
ERROR: /root/.cache/bazel/_bazel_root/f8d1071c69ea316497c31e40fe01608c/external/org_tensorflow/third_party/clang_toolchain/cc_configure_clang.bzl:3:1: file '@bazel_tools//tools/cpp:cc_configure.bzl' does not contain symbol 'cc_autoconf_impl'.
ERROR: error loading package '': Extension file 'third_party/clang_toolchain/cc_configure_clang.bzl' has errors.
ERROR: error loading package '': Extension file 'third_party/clang_toolchain/cc_configure_clang.bzl' has errors.
INFO: Elapsed time: 0.107s
ERROR: Couldn't start the build. Unable to run tests.
And in my container, the bazel version is 0.9.0.
I just came across this error. First, please check whether your Bazel version is 0.5.4 by running the command bazel version.
If the Bazel version is 0.5.4, you need to upgrade it to 0.12.0. To update, you can change BAZEL_VERSION in Dockerfile.devel to 0.12.0 and re-run all the steps.
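Roughly, that change in Dockerfile.devel is a one-liner (the exact line may differ slightly between releases):

ENV BAZEL_VERSION 0.12.0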
Or you can update Bazel directly in the Docker container:
Download bazel-0.12.0-installer-linux-x86_64.sh from the https://github.com/bazelbuild/bazel/releases page
chmod +x ./bazel-0.12.0-installer-linux-x86_64.sh
./bazel-0.12.0-installer-linux-x86_64.sh
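Put together, the in-container upgrade looks roughly like this, assuming curl is available and the standard release asset naming:

curl -fLO https://github.com/bazelbuild/bazel/releases/download/0.12.0/bazel-0.12.0-installer-linux-x86_64.sh
chmod +x ./bazel-0.12.0-installer-linux-x86_64.sh
./bazel-0.12.0-installer-linux-x86_64.sh
bazel version   # should now report 0.12.0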
I have already answered this on GitHub and it worked; please refer to https://github.com/tensorflow/serving/issues/851 and https://github.com/tensorflow/serving/issues/854.
I have a set of integration tests that rely on a postgres database being available. In order for the tests to be independent, I am using this project to start a postgres docker container before the tests:
@Rule
public DockerRule postgresDockerRule = DockerRule
        .builder()
        .imageName("postgres:9")
        .expose(databaseConfig.port().toString(), "5432")
        .env("POSTGRES_PASSWORD", databaseConfig.password())
        .waitForMessage("PostgreSQL init process complete; ready for start up.", 60)
        .keepContainer(false)
        .build();
This works fine locally. The rule starts up the container, the tests are run and after the tests, the container is deleted.
However, I am having trouble getting these tests to run on gitlab.com.
The tests always fail with the following exception (this is the end of a longer stacktrace):
Caused by: java.io.IOException: No such file or directory
at jnr.unixsocket.UnixSocketChannel.doConnect(UnixSocketChannel.java:94)
at jnr.unixsocket.UnixSocketChannel.connect(UnixSocketChannel.java:102)
at com.spotify.docker.client.ApacheUnixSocket.connect(ApacheUnixSocket.java:73)
at com.spotify.docker.client.UnixConnectionSocketFactory.connectSocket(UnixConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:71)
at org.glassfish.jersey.apache.connector.ApacheConnector.apply(ApacheConnector.java:435)
... 21 more
The project providing the DockerRule uses the spotify docker client to connect to the remote API of the docker daemon. (That is why it throws an IOException stating "No such file or directory" - it cannot find the socket.)
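For context, the rule creates its client roughly like this (a sketch based on the spotify docker-client API, exception handling omitted). DefaultDockerClient.fromEnv() honours DOCKER_HOST and DOCKER_CERT_PATH and otherwise falls back to unix:///var/run/docker.sock, which is exactly the socket that does not exist inside the GitLab job:

import com.spotify.docker.client.DefaultDockerClient;
import com.spotify.docker.client.DockerClient;

// Resolve the daemon endpoint from DOCKER_HOST/DOCKER_CERT_PATH,
// defaulting to the local unix socket when they are not set.
DockerClient docker = DefaultDockerClient.fromEnv().build();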
My .gitlab-ci.yml file looks like this:
stages:
  - build
  - deploy

build_rest-api:
  image: openjdk:8
  stage: build
  script:
    - ./gradlew clean build -Dorg.gradle.parallel=true
  artifacts:
    when: always
    paths:
      - 'rest-api/build/distributions/*.zip'
      - '*/build/reports/*'

deploy_on_development:
  image: governmentpaas/cf-cli
  stage: deploy
  before_script:
    - cf api ...
    - cf auth ...
    - cf target -o XXX -s development
  script:
    - cf push ....
  only:
    - master
What I would like to achieve is:
Integration tests are run locally and during the CI process
Integration tests connect to a real database
No difference between local and CI test configuration
I thought about providing the postgres database as a service during the CI process using the services section of .gitlab-ci.yml. But that would mean that I have to manually start up a postgres database before I can run my integration tests locally. What I liked about the junit rule approach was that I could simply run my integration tests like any other tests by just having docker running in the background.
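For reference, that services-based setup would look roughly like this (the job name, Gradle task and $DB_PASSWORD variable are illustrative; GitLab exposes the service under the hostname postgres):

integration_test:
  image: openjdk:8
  stage: build
  services:
    - postgres:9
  variables:
    POSTGRES_PASSWORD: $DB_PASSWORD   # picked up by the postgres service image
  script:
    - ./gradlew integrationTest       # hypothetical task that runs only the integration tests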
It would be nice if someone could come up with a solution that allows me to connect to a Docker daemon during the CI process, but I am also happy to hear ideas on how to change my overall integration-testing setup to make this work.
When I run my UI test, it just gets stuck at "Instantiating tests...":
Uploading file:
local path: /Users/eclo/AndroidStudioProjects/Minicooper4android/Minicooper4android/app/build/outputs/apk/NAMmogujie714-uiTest-debug-androidTest-unaligned.apk
remote path: /data/local/tmp/com.mogujie.test
No apk changes detected. Skipping file upload, force stopping package instead.
DEVICE SHELL COMMAND: am force-stop com.mogujie.test
Running tests
What's wrong?