I'm trying to create a container from within a playbook that would execute some Terraform code, but as soon as the container is created, the first command I ask of it (apt update) fails on exit with the error message: /bin/bash: line 1: apt update && apt install -y build-essential && bundle install && terraspace check_setup && terraspace up rds -y: command not found.
The weirdest part is that it used to work well, but since we updated from Ansible 2.10.x to Ansible 2.12.8 this error has appeared.
Here's the code used to create the container:
- name: Launch Docker container with Terraspace image
  register: container_terraspace_apply_result
  community.docker.docker_container:
    name: "terraspace-test"
    image: a-private-ubuntu-base-terraform-image
    working_dir: /opt/terraform
    volumes:
      - "/home/terraform:/opt/terraform:rw"
      - "/home/.aws:/root/.aws:ro"
    command_handling: correct
    entrypoint: ["/bin/bash"]
    command: [
      "-l",
      "-c",
      "'apt update && apt install -y build-essential && bundle install && terraspace check_setup && terraspace up rds -y'"
    ]
    env:
      REGION: "eu-west-1"
      AWS_REGION: "eu-west-1"
      TS_ENV: "staging"
      AWS_PROFILE: "terraform-staging"
I found out that if I only pass "apt" I do get the apt help output you would get when typing it with no arguments, so the binary path is good; the only problem seems to be the space after it???
Is there something I'm missing here? I've been looking around and couldn't find a solution to my issue.
Thank you!
So it turned out it was the command_handling option that needed to be set to compatibility!
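For reference, a hedged sketch of the two ways out (only the relevant keys of the task above are shown). With command_handling: correct, the command list items are passed on verbatim, so the literal inner quotes become part of the string /bin/bash -c receives, and the whole quoted string is looked up as a single command name, which is exactly the "command not found" error above:

    # Option A (the fix that worked here): restore the pre-2.12 handling.
    command_handling: compatibility

    # Option B (untested sketch): keep the new handling but drop the inner quotes,
    # since list elements now reach the entrypoint unmodified.
    command_handling: correct
    command: ["-l", "-c", "apt update && apt install -y build-essential && bundle install && terraspace check_setup && terraspace up rds -y"]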
I ran the following command and it seems to be stuck, with neither an error nor success:
docker run --rm ubuntu:22.04 /bin/bash -c "apt-get update && apt-get install -y subversion && svn co https://github.com/GPUOpen-LibrariesAndSDKs/AMF/trunk/amf/public/include --non-interactive amf-headers"
I also tried a debug log; still no output:
docker run --rm ubuntu:22.04 /bin/bash -c "apt-get update && apt-get install -y subversion && svn co https://github.com/GPUOpen-LibrariesAndSDKs/AMF/trunk/amf/public/include --non-interactive --config-option servers:global:neon-debug-mask=1073741824 amf-headers"
So I tried Ubuntu 20.04 and it at least gives an error:
docker run --rm ubuntu:20.04 /bin/bash -c "apt-get update && apt-get install -yqq subversion && svn co https://github.com/GPUOpen-LibrariesAndSDKs/AMF/trunk/amf/public/include --non-interactive amf-headers"
svn: E170013: Unable to connect to a repository at URL 'https://github.com/GPUOpen-LibrariesAndSDKs/AMF/trunk/amf/public/include'
svn: E230001: Server SSL certificate verification failed: issuer is not trusted
Why does svn give no response on Ubuntu 22.04?
If it is an SSL certificate problem as on Ubuntu 20.04, it should at least give an error message instead of nothing.
More detail:
GitHub repositories can be accessed from both Git and Subversion (SVN) clients, as documented by GitHub.
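For context, GitHub's SVN bridge maps a repository's default branch to /trunk, which is why the checkout URLs above end in /trunk/amf/public/include. A quick way to see the mapping (my own illustration, not from the original post):

    # Lists the SVN view of the repository; expected output: branches/ tags/ trunk/
    svn ls https://github.com/GPUOpen-LibrariesAndSDKs/AMF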
I ran all the tests with GitHub Actions (ubuntu-latest / ubuntu-20.04). Included software:
Docker Compose v1: 1.29.2
Docker Compose v2: 2.1.1+azure-1
Docker-Buildx: 0.7.0
Docker-Moby Client: 20.10.11+azure-1
Docker-Moby Server: 20.10.11+azure-1
Test workflow:
name: Test
on:
  push:
    branches: [ main ]
  workflow_dispatch:
jobs:
  Test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        version: ["22.04", "21.10", "21.04", "20.04"]
    steps:
      - name: Test
        timeout-minutes: 5
        shell: bash
        run: |
          docker run --rm ubuntu:${{ matrix.version }} /bin/bash -c "apt-get update && apt-get install -yqq subversion && svn co https://github.com/GPUOpen-LibrariesAndSDKs/AMF/trunk/amf/public/include --non-interactive amf-headers"
If you really want to know where I use this command, you can try building mpv with https://github.com/shinchiro/mpv-winbuild-cmake and the Docker image ubuntu:22.04. The toolchain uses svn to download some files.
This is a bug in the current Ubuntu 22.04 development version, filed as bug #1959717. I guess we have to wait until it gets fixed.
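In the meantime, a possible stopgap (my sketch, not part of the original answer) is to fetch the same directory with Git instead of svn, since only the headers are needed:

    # Untested sketch: a shallow, blobless, sparse clone of just amf/public/include.
    git clone --depth 1 --filter=blob:none --sparse https://github.com/GPUOpen-LibrariesAndSDKs/AMF.git amf-headers
    cd amf-headers
    git sparse-checkout set amf/public/include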
I created a custom image with the following Dockerfile:
FROM apache/airflow:2.1.1-python3.8
USER root
RUN apt-get update \
    && apt-get -y install gcc gnupg2 \
    && curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
    && curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update \
    && ACCEPT_EULA=Y apt-get -y install msodbcsql17 \
    && ACCEPT_EULA=Y apt-get -y install mssql-tools
RUN echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc \
    && echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc \
    && source ~/.bashrc
RUN apt-get -y install unixodbc-dev \
    && apt-get -y install python-pip \
    && pip install pyodbc
RUN echo -e “AIRFLOW_UID=$(id -u) \nAIRFLOW_GID=0” > .env
USER airflow
The image builds successfully, but when I try to run it, I get this error:
"airflow command error: the following arguments are required: GROUP_OR_COMMAND, see help above."
I have tried supplying a group ID with the --user flag, but I can't figure it out.
How can I start this custom Airflow Docker image?
Thanks!
First of all, this line is wrong:
RUN echo -e “AIRFLOW_UID=$(id -u) \nAIRFLOW_GID=0” > .env
If you are running it with Docker Compose (I presume you took it from https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html), this is something you should run on the host machine, not in the image. Remove that line; it has no effect there.
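That is, per the linked quick-start, the .env file is created on the host, next to docker-compose.yaml:

    # Run on the host, not in the Dockerfile (from the Airflow quick-start):
    echo -e "AIRFLOW_UID=$(id -u)\nAIRFLOW_GID=0" > .env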
Secondly, it really depends on what command you run. The GROUP_OR_COMMAND message you got is the output of the airflow command. You have not copied the whole output of your command, but this is the message you get when you run airflow without telling it what to do. When you run the image, you run the airflow command by default, and it has a number of subcommands that can be executed. So the "see help above" message tells you the very thing you should do: look at the help and see which subcommand you wanted to run (and run it).
docker run -it apache/airflow:2.1.2
usage: airflow [-h] GROUP_OR_COMMAND ...

positional arguments:
  GROUP_OR_COMMAND

    Groups:
      celery         Celery components
      config         View configuration
      connections    Manage connections
      dags           Manage DAGs
      db             Database operations
      jobs           Manage jobs
      kubernetes     Tools to help run the KubernetesExecutor
      pools          Manage pools
      providers      Display providers
      roles          Manage roles
      tasks          Manage tasks
      users          Manage users
      variables      Manage variables

    Commands:
      cheat-sheet    Display cheat sheet
      info           Show information about current Airflow and environment
      kerberos       Start a kerberos ticket renewer
      plugins        Dump information about loaded plugins
      rotate-fernet-key
                     Rotate encrypted connection credentials and variables
      scheduler      Start a scheduler instance
      sync-perm      Update permissions for existing roles and optionally DAGs
      version        Show the version
      webserver      Start a Airflow webserver instance

optional arguments:
  -h, --help         show this help message and exit

airflow command error: the following arguments are required: GROUP_OR_COMMAND, see help above.
When you extend the official image, the entrypoint passes its parameters on to the airflow command, which is what causes this problem. Check this out: https://airflow.apache.org/docs/docker-stack/entrypoint.html#entrypoint-commands
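So, assuming your custom image is tagged my-airflow (a name made up for illustration), you start it by giving the entrypoint a subcommand, per the entrypoint documentation linked above:

    # Run a specific airflow subcommand:
    docker run -it my-airflow webserver
    docker run -it my-airflow version

    # The entrypoint special-cases "bash", dropping you into a shell instead:
    docker run -it my-airflow bash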
Dockerfile:
# https://github.com/symfony/panther#docker-integration
FROM php:latest
RUN apt-get update && apt-get install -y libzip-dev zlib1g-dev chromium && docker-php-ext-install zip
ENV PANTHER_NO_SANDBOX 1
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
docker-compose.yml:
version: "3"
services:
crawler:
build: .
working_dir: /usr/src
volumes:
- .:/usr/src
command: /bin/sh -c "/usr/local/bin/composer install && php index.php"
composer.json:
{
    "require": {
        "symfony/panther": "^0.6.0"
    }
}
index.php:
<?php
// https://github.com/symfony/panther#basic-usage
require __DIR__.'/vendor/autoload.php'; // Composer's autoloader
$client = \Symfony\Component\Panther\Client::createChromeClient();
$client->request('GET', 'https://api-platform.com'); // Yes, this website is 100% written in JavaScript
$client->clickLink('Support');
// Wait for an element to be rendered
$crawler = $client->waitFor('.support');
echo $crawler->filter('.support')->text();
$client->takeScreenshot('screen.png'); // Yeah, screenshot!
All files are in the same location. I run docker-compose build && docker-compose up and I'm getting the following error:
crawler_1 | Fatal error: Uncaught RuntimeException: Could not start chrome (or it crashed) after 30 seconds. in /usr/src/vendor/symfony/panther/src/ProcessManager/WebServerReadinessProbeTrait.php:51
This is similar to https://github.com/symfony/panther/issues/200; however, in my case I'm not using Panther for tests, but only for scraping, and I really don't know how to fix this. I think my problem might be related to invalid docker / docker-compose files.
I had the same error. My solution was to install unzip, as it says in the readme:
"Warning: On *nix systems, the unzip command must be installed or you
will encounter an error similar to RuntimeException: sh: 1: exec:
/app/vendor/symfony/panther/src/ProcessManager/../../chromedriver-bin/chromedriver_linux64:
Permission denied (or chromedriver_linux64: not found). The underlying
reason is that PHP's ZipArchive doesn't preserve UNIX executable
permissions."
Then, finally, reinstall the Panther library.
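Concretely, a sketch of that fix applied to the Dockerfile from the question (the only change is the added unzip package):

    FROM php:latest
    RUN apt-get update && apt-get install -y libzip-dev zlib1g-dev chromium unzip && docker-php-ext-install zip
    ENV PANTHER_NO_SANDBOX 1
    RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

After that, rebuild with docker-compose build --no-cache and remove the vendor directory before the next docker-compose up, so Composer reinstalls Panther and re-extracts chromedriver with the correct permissions.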
Hello, I have a simple configuration in my project:
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:7
    steps:
      - checkout
      - run:
          name: install-dependencies
          command: npm install
      - run:
          name: tests
          command: npm test
      - deploy:
          name: digital-ocean
          command: ssh -o "StrictHostKeyChecking no" user@hostname "cd ~/profile-store; git pull; npm install; forever start app.js"
The problem is that it needs multiple commands:
cd client
npm start
cd ..
(in the second iteration it should install the packages from the server, and in the next one, run the unit tests in the client)
I tried this syntax:
command: ["cd client", "npm install", "cd .."]
But I'm getting an error. The question is: how can I execute three commands in one command instruction?
command: cd client && npm install && cd ..
For enhanced readability, you can use a folded block scalar (folds linebreaks into spaces):
command: >-
  cd client &&
  npm install &&
  cd ..
Note that you do not really need the final cd .. since the shell instance executing the command is not re-used.
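Alternatively, a run step can set its own working directory, which avoids the cd juggling entirely (a sketch using the working_directory key from the CircleCI 2.x docs; the step name is mine):

    - run:
        name: install-client-dependencies
        working_directory: client
        command: npm install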
Problem
My wercker build exits with Failed step: setup environment - Command exited with exit code: 1 when I switch user in my Docker image. I'm running wercker dev from the command line. The Dockerfile builds fine with Docker itself on the command line, as well as on Docker Hub, and I can run it fine. It's just when I use it for wercker that the error occurs.
For example in my Dockerfile is the following code:
# Adding user
RUN adduser --disabled-password --gecos '' dockworker && adduser dockworker sudo && echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
RUN mkdir -p /home/dockworker && chown -R dockworker:dockworker /home/dockworker
# The line the build seems to break on:
USER dockworker
When I comment this line out, it seems to pass. The problem with this, for me, is the following: I'd like to switch to another user, since I'm trying to install nvm (for gulp and bower), and I generally prefer not to install it as root, so I add a user for this.
Workaround?
However, when I do install nvm as root in my Dockerfile (so just removing the user-related lines in the code block above completely):
ENV NODE_VERSION 0.12.7
ENV NVM_DIR /usr/local/nvm
# NVM
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.25.4/install.sh | NVM_DIR=/usr/local/nvm bash
# Install the specified node version and set it as the default one, install the global npm packages
RUN . /usr/local/nvm/nvm.sh && nvm install $NODE_VERSION && nvm alias default $NODE_VERSION && npm install -g bower && npm install -g gulp
Then it does get past the setup environment stage, but during the steps it errors out that nvm and npm are not found. The step in the wercker.yml:
box:
  id: francobolli/docker-ubuntu-14.04-php-5.6
  tag: latest
  env:
    NVM_DIR: /usr/local/nvm
dev:
  steps:
    - script:
        name: gulp styles and javascript
        code: |
          npm install
          bower install --allow-root
          gulp --env=production
I don't really understand this. When I run both Docker images from the command line (so with wercker removed from the context completely), I can execute nvm and npm just fine, but when I run them through wercker, the ~/.bashrc file seems to never get executed (which would make sense if the steps run in a non-interactive shell, since bash only reads ~/.bashrc for interactive sessions). When I cat ~/.bashrc during the steps, I can see:
export NVM_DIR="/usr/local/nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm
Workaround!
When I enter this in a step, it will be executed and I can npm install without a problem, so it seems this is never executed through the .bashrc:
...
- script:
    name: gulp styles and javascript
    code: |
      [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # It works when I put it here, but it's also in ~/.bashrc, which doesn't seem to get executed
      npm install
...
Note: If I source ~/.bashrc in the wercker step instead, it does not work (many default .bashrc files return early when the shell is non-interactive, which could explain this).
Question
So my question is: what am I doing wrong that I'm not able to switch user in the wercker build? And even if I could, would I have the same problem as when running nvm as root? nvm and npm CAN be found when a Docker container is instantiated from the command line, but CAN'T be found when running it with wercker. What's the best solution?
I'd rather not add commands in the wercker.yml if it can be resolved through proper user configuration or proper nvm configuration. Sorry if I'm missing something very obvious.
This has nothing to do with Docker configuration, but with how Wercker handles Docker boxes. From the documentation:
Using Sudo
The sudo command is no longer supported in wercker v2 and effectively does nothing when used.
And for deployment:
Please note that if you update a project to make use of Docker (Ewok version) and this project has autodeployment, this deploy will most likely fail. We will update our documentation in the future on how to deploy these containers.
However, I did get it to build (and deploy) with the solution (temporary workaround?) as displayed in the original question.
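For completeness, a PATH-based alternative to sourcing nvm.sh in every step (my sketch, not from the original answer): since non-interactive shells skip ~/.bashrc, the usual Docker-friendly fix is to put the installed node version on the PATH at image build time:

    # Untested sketch; adjust the directory to wherever nvm actually installed
    # this node version (e.g. $NVM_DIR/v0.12.7 or $NVM_DIR/versions/node/v0.12.7).
    ENV PATH /usr/local/nvm/v0.12.7/bin:$PATH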