I have a project that relies on the DataStax php-driver:
https://github.com/datastax/php-driver
I have been using this driver successfully with PHP 5.6 on Travis.
I wanted to switch to container builds, since the owners of the php-driver have managed to do so (getting faster builds is also a bonus).
I ran into the following error:
checking whether the C compiler works... no
configure: error: in `/home/travis/build/company-project/git-repo-project/php-driver/ext':
configure: error: C compiler cannot create executables
configure: error: C compiler cannot create executables
See `config.log' for more details
The command "./install.sh" failed and exited with 77 during .
The .travis.yml file looks like the following:
language: php
sudo: false
addons:
  apt:
    packages:
      - libssl-dev
      - g++
      - make
      - cmake
      - clang
cache:
  ccache: true
  directories:
    - ${HOME}/dependencies
php:
  - 5.6
env:
  global:
    # Configure the .phpt tests to be Travis friendly
    - REPORT_EXIT_STATUS=1
    - TEST_PHP_ARGS="-q -s output.txt -g XFAIL,FAIL,BORK,WARN,LEAK,SKIP -x --show-diff"
    - PATH=$HOME/.local/bin:$PATH
    # Indicate the cached dependencies directory
    - CACHED_DEPENDENCIES_DIRECTORY=${HOME}/dependencies
    # Add libuv source build for container based TravisCI
    - LIBUV_VERSION=1.8.0
    - LIBUV_ROOT_DIR=${CACHED_DEPENDENCIES_DIRECTORY}/libuv/${LIBUV_VERSION}
    - APP_ENV=travis
before_install:
  # Configure, build, install (or use cached libuv)
  - if [ ! -d "${LIBUV_ROOT_DIR}" ]; then
      pushd /tmp;
      wget -q http://dist.libuv.org/dist/v${LIBUV_VERSION}/libuv-v${LIBUV_VERSION}.tar.gz;
      tar xzf libuv-v${LIBUV_VERSION}.tar.gz;
      pushd /tmp/libuv-v${LIBUV_VERSION};
      sh autogen.sh;
      ./configure --prefix=${LIBUV_ROOT_DIR};
      make -j$(nproc) install;
      popd;
      popd;
    else echo "Using Cached libuv v${LIBUV_VERSION}. Dependency does not need to be re-compiled";
    fi
  - git clone https://github.com/datastax/php-driver.git
  - cd php-driver
  - git submodule update --init
  - cd ext
  - ./install.sh
  - cd "$TRAVIS_BUILD_DIR"
  - echo "extension=cassandra.so" >> `php --ini | grep "Loaded Configuration" | sed -e "s|.*:\s*||"`
  # Install CCM
  - pip install --user ccm
install:
  - composer update -n
services:
  - mysql
  - cassandra
script:
  - vendor/bin/codecept run
Initially, libssl-dev was the only package I needed, but since the error points to a missing C compiler, I decided to check the DataStax build guide and toss in some extra packages: http://datastax.github.io/cpp-driver/topics/building/
The error remains unchanged. I know that DataStax's driver passes its own Travis build because its make step compiles the C code, but I was able to do the entire run successfully back when I used sudo-enabled builds instead of container builds. Any help is appreciated.
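For reference, one diagnostic I could add to the .travis.yml (a hypothetical step; the config.log path is assumed from the error message above) is to dump the tail of the configure log whenever the job fails, so the real compiler error shows up in the Travis output:
# Hypothetical addition: print the configure log on failure so the actual
# reason the C compiler check fails is visible in the build log.
after_failure:
  - tail -n 100 "${TRAVIS_BUILD_DIR}/php-driver/ext/config.log"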
Issue
I am trying to install some drivers on a Docker image. During the procedure, one of the installation steps asks me for input from the command line, which freezes the procedure. How can I solve this?
How to reproduce the issue
Start a shell in the base image: docker run -it nvcr.io/nvidia/deepstream:6.0-triton
Run the following commands:
export DEBIAN_FRONTEND=noninteractive
# Set some variables to download the proper header modules
export VERSION="2.83.18%2Brev1.dev"
export BALENA_MACHINE_NAME="genericx86-64-ext"
# Set variables for the Yocto version of the OS
export YOCTO_VERSION=5.10.43
export YOCTO_KERNEL=${YOCTO_VERSION}-yocto-standard
# Set variables to download proper NVIDIA driver
export NVIDIA_DRIVER_VERSION=470.86
export NVIDIA_DRIVER=NVIDIA-Linux-x86_64-${NVIDIA_DRIVER_VERSION}
# Install some prereqs
apt install -y git wget unzip build-essential libelf-dev bc libssl-dev bison flex software-properties-common
mkdir -p /usr/src/kernel_source
cd /usr/src/kernel_source
# Causes a pipeline to produce a failure return code if any command errors.
# Normally, pipelines only return a failure if the last command errors.
#SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# Download the kernel source then prepare kernel source to build a module.
curl -fsSL "https://files.balena-cloud.com/images/${BALENA_MACHINE_NAME}/${VERSION}/kernel_source.tar.gz" \
| tar xz --strip-components=2 && \
make -C build modules_prepare -j"$(nproc)"
The last command will have the following output, and then it will freeze waiting for user input:
make: Entering directory '/usr/src/kernel_source/build'
SYNC include/config/auto.conf.cmd
HOSTCC scripts/basic/fixdep
HOSTCC scripts/kconfig/conf.o
LEX scripts/kconfig/lexer.lex.c
HOSTCC scripts/kconfig/expr.o
YACC scripts/kconfig/parser.tab.[ch]
HOSTCC scripts/kconfig/symbol.o
HOSTCC scripts/kconfig/preprocess.o
HOSTCC scripts/kconfig/util.o
HOSTCC scripts/kconfig/confdata.o
HOSTCC scripts/kconfig/lexer.lex.o
HOSTCC scripts/kconfig/parser.tab.o
HOSTLD scripts/kconfig/conf
*
* Restart config...
*
*
* BPF based packet filtering framework (BPFILTER)
*
BPF based packet filtering framework (BPFILTER) (BPFILTER) [Y/n/?] y
bpfilter kernel module with user mode helper (BPFILTER_UMH) [M/n/y/?] (NEW)
Notes
It seems that export DEBIAN_FRONTEND=noninteractive is not doing the job.
Looking at the make options, there doesn't seem to be a way to skip the question, but perhaps I am missing something. How can I fix this?
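One workaround I am considering (an assumption on my part, not something confirmed to work for this image) is to answer any new kconfig options with their defaults non-interactively before preparing the module build, so the prompt never appears:
# Hypothetical workaround: set new kernel config symbols to their defaults
# without prompting, then prepare the source tree for building modules.
make -C build olddefconfig
make -C build modules_prepare -j"$(nproc)"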
I am downloading and unzipping binaryen in a run step.
- run: wget -c https://github.com/WebAssembly/binaryen/releases/download/version_101/binaryen-version_101-x86_64-linux.tar.gz -O - | tar -xz -C /tmp/
I am then updating the PATH in $BASH_ENV.
- run: echo "export PATH=/tmp/binaryen-version_101/bin/wasm-opt:\${PATH}" >> $BASH_ENV
However, I still get a "command not found" error for wasm-opt.
How can I install the downloaded wasm-opt binary such that another run step can use it?
The main issue is that the PATH variable should contain a list of directories, but you added the actual binary itself to the PATH instead of the directory it resides in.
So, for example, instead of /tmp/binaryen-version_101/bin/wasm-opt you want /tmp/binaryen-version_101/bin/. Also, after you add a directory to the PATH, you won't be able to run binaries from it until the next step.
Here's an example config I made:
version: 2.1
workflows:
  main:
    jobs:
      - build
jobs:
  build:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run: curl -sSL "https://github.com/WebAssembly/binaryen/releases/download/version_101/binaryen-version_101-x86_64-linux.tar.gz" | tar -xz -C /tmp/
      - run: echo 'export PATH=/tmp/binaryen-version_101/bin/:${PATH}' >> $BASH_ENV
      - run: wasm-opt
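As an optional sanity check (a hypothetical extra step, not required for the fix), a later step can confirm that the binary now resolves through PATH:
      # Hypothetical follow-up step: verify wasm-opt is found on the updated PATH.
      - run: which wasm-opt && wasm-opt --version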
I have installed rootless Docker on an Ubuntu host machine. I have a Dockerfile for building TimescaleDB, with the most important part looking like this:
# Install the tools we need for installation
RUN apt-get update && apt-get -y install gnupg2 lsb-release wget
# Add Postgres and Timescale package repository
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ $(lsb_release -c -s)-pgdg main" | tee /etc/apt/sources.list.d/pgdg.list
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
RUN sh -c "echo 'deb https://packagecloud.io/timescale/timescaledb/debian/ `lsb_release -c -s` main' > /etc/apt/sources.list.d/timescaledb.list"
RUN wget --quiet -O - https://packagecloud.io/timescale/timescaledb/gpgkey | apt-key add -
# Install Timescale
RUN apt-get update && apt-get -y install timescaledb-2-postgresql-12=2.0.0-zz~debian10
The corresponding docker-compose file looks like this:
timescale:
  tty: true
  volumes:
    - timescale-volume:/var/lib/postgresql/data:rw
  build:
    context: ./timescale
    dockerfile: Dockerfile
  command:
    - /bin/bash
  depends_on:
    - cert-mounter
When I run docker-compose up with sudo, it works fine: the image is built and the container is running. If I execute it rootless, I get the following error:
dpkg: error processing package postgresql-12 (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of timescaledb-2-postgresql-12:
timescaledb-2-postgresql-12 depends on postgresql-12; however:
Package postgresql-12 is not configured yet.
dpkg: error processing package timescaledb-2-postgresql-12 (--configure):
dependency problems - leaving unconfigured
Setting up exim4-daemon-light (4.92-8+deb10u5) ...
debconf: unable to initialize frontend: Dialog
debconf: (TERM is not set, so the dialog frontend is not usable.)
debconf: falling back to frontend: Readline
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of start.
Initializing GnuTLS DH parameter file
Setting up libmailutils5:amd64 (1:3.5-4) ...
Setting up mailutils (1:3.5-4) ...
update-alternatives: using /usr/bin/frm.mailutils to provide /usr/bin/frm (frm) in auto mode
update-alternatives: using /usr/bin/from.mailutils to provide /usr/bin/from (from) in auto mode
update-alternatives: using /usr/bin/messages.mailutils to provide /usr/bin/messages (messages) in auto mode
update-alternatives: using /usr/bin/movemail.mailutils to provide /usr/bin/movemail (movemail) in auto mode
update-alternatives: using /usr/bin/readmsg.mailutils to provide /usr/bin/readmsg (readmsg) in auto mode
update-alternatives: using /usr/bin/dotlock.mailutils to provide /usr/bin/dotlock (dotlock) in auto mode
update-alternatives: using /usr/bin/mail.mailutils to provide /usr/bin/mailx (mailx) in auto mode
dpkg: dependency problems prevent configuration of timescaledb-2-loader-postgresql-12:
timescaledb-2-loader-postgresql-12 depends on postgresql-12; however:
Package postgresql-12 is not configured yet.
dpkg: error processing package timescaledb-2-loader-postgresql-12 (--configure):
dependency problems - leaving unconfigured
Processing triggers for libc-bin (2.28-10) ...
Processing triggers for mime-support (3.62) ...
Errors were encountered while processing:
postgresql-common
postgresql-12
timescaledb-2-postgresql-12
timescaledb-2-loader-postgresql-12
E: Sub-process /usr/bin/dpkg returned an error code (1)
The command '/bin/sh -c apt-get update && apt-get -y install timescaledb-2-postgresql-12=2.0.0-zz~debian10' returned a non-zero code: 100
ERROR: Service 'timescale' failed to build
What could be the problem? Other containers are somehow built and run rootless without problems...
So I managed to make it work. In my Dockerfile I also set the UID of a user, because I share some volumes and want the UIDs to be consistent between the containers. At the top of my Dockerfile I had the following:
RUN useradd --uid 80000 postgres
Replacing the UID with a lower value solved the issue:
RUN useradd --uid 18000 postgres
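For background (my own inference, not something stated above): rootless Docker can only map container UIDs into the subordinate UID range delegated to the host user in /etc/subuid, which is commonly 65536 IDs, so a UID of 80000 falls outside the default range while 18000 fits. A rough way to inspect the range on the host:
# Show the subordinate UID/GID ranges rootless Docker can map for this user;
# the third field is the size of the range (commonly 65536).
grep "^$(whoami):" /etc/subuid /etc/subgid
# If those ranges are enlarged, restart the rootless daemon afterwards:
systemctl --user restart docker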
I am trying to set up a Bitbucket pipeline for a PHP-based (Laravel Lumen) app intended to be deployed on nanobox.io. I want this pipeline to deploy my app as soon as code changes are committed.
My bitbucket-pipelines.yml looks like this
image: php:7.1.29
pipelines:
  branches:
    staging:
      - step:
          name: Publish to staging version
          deployment: staging
          caches:
            - composer
          script:
            - apt-get update && apt-get install -y unzip
            - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
            - composer install
            # - vendor/bin/phpunit
            - bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
            - nanobox deploy
This gives the following error:
+ nanobox deploy
Failed to validate provider - missing docker - exec: "docker": executable file not found in $PATH
Using nanobox with native requires tools that appear to not be available on your system.
docker
View these requirements at docs.nanobox.io/install
I then followed this page and changed the second-to-last line to look like this:
sudo bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
Having done that, I am getting the following error:
+ sudo bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
bash: sudo: command not found
I have run out of tricks here, and I don't have much experience in this area. Any help is very much appreciated.
First, you can't use sudo in Pipelines, but that's probably not relevant here. The issue is that the nanobox CLI wants to execute docker, which isn't installed. You should enable the docker service for your step.
image: php:7.1.29
pipelines:
  branches:
    staging:
      - step:
          name: Publish to staging version
          deployment: staging
          # Enable docker service
          services:
            - docker
          caches:
            - composer
          script:
            - docker version
You might want to have a look at the Pipelines docs as well: Run Docker commands in Bitbucket Pipelines
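With the service enabled, the original script from the question should be able to find the docker executable; a sketch of the full step reusing those commands unchanged (the nanobox bootstrap URL and deploy command are taken verbatim from the question, not verified by me) could look like this:
image: php:7.1.29
pipelines:
  branches:
    staging:
      - step:
          name: Publish to staging version
          deployment: staging
          services:
            - docker
          caches:
            - composer
          script:
            - apt-get update && apt-get install -y unzip
            - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
            - composer install
            - bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
            - nanobox deploy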
I'm trying to set up Codecov as the code coverage tool in my repository. I referred to this link to pass reports through a Docker container:
Link - https://github.com/codecov/support/wiki/Testing-with-Docker
But Travis CI fails and gives this error:
docker: Error parsing reference: "..." is not a valid repository/tag.
Here is my .travis.yml:
sudo: required
dist: trusty
language: node_js
node_js:
  - 6
before_install:
  - export CHROME_BIN=chromium-browser
  - export DISPLAY=:99.0
  - sh -e /etc/init.d/xvfb start
  - docker run -v "$PWD/shared:/shared" ...
before_script:
  - ng build
script:
  - ng test --watch=false
  - ng lint
  - >
    docker run -ti -v $(pwd):/app --workdir=/app coala/base coala --version
after_success:
  - bash ./deploy.sh
  - bash <(curl -s https://codecov.io/bash)
  - mv -r coverage/ shared
cache:
  bundler: true
  directories:
    - node_modules
    - .coala-cache
services: docker
branches:
  only:
    - angular
How should I solve this? Thanks!
I assume you refer to Codecov Outside Docker. The current error message already tells you that the three dots ... need to be replaced with a real Docker repository name, e.g. node:6-alpine.
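For example (purely to illustrate the point, with node:6-alpine as a stand-in image and node --version as a placeholder command), the failing line from the question would become something like:
- docker run -v "$PWD/shared:/shared" node:6-alpine node --version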
What you're still missing is the part that runs the tests (including reports) inside the Docker container, so that you can mv the test reports into the shared folder. You could achieve that by adding a custom Dockerfile based on Node, similar to the one below. I chose a more or less full base image, including Chrome and other tools, to make your use case work:
FROM markadams/chromium-xvfb-js:7
WORKDIR /proj
CMD npm install && \
node_modules/.bin/ng build && \
node_modules/.bin/ng test --watch=false && \
node_modules/.bin/ng lint && \
mkdir -p shared && \
mv coverage.txt shared
That custom image needs to be built and then run like this (assuming the Dockerfile is in your project root directory):
docker build -t ci-build .
docker run --rm -v "$(pwd):/proj" ci-build
I suggest changing the .travis.yml as follows:
sudo: required
dist: trusty
language: node_js
node_js:
  - 6
before_install:
  - docker build -t ci-build .
script:
  - >
    docker run --rm -v $(pwd):/proj ci-build
  - >
    docker run -ti -v $(pwd):/app --workdir=/app coala/base coala --version
after_success:
  - bash ./deploy.sh
  - bash <(curl -s https://codecov.io/bash)
cache:
  bundler: true
  directories:
    - node_modules
    - .coala-cache
services: docker
branches:
  only:
    - angular
Another note: the coala/base image works similarly.