I tried installing Impala in a docker container following the instructions at:
https://cwiki.apache.org/confluence/display/IMPALA/Impala+Development+Environment+inside+Docker
to the letter. Yet, I got the error below:
impdev@eefa956ba515:~/Impala/shell$ sh impala-shell
impala-shell: 32: impala-shell: Bad substitution
ls: cannot access '/home/impdev/Impala/shell/ext-py/*.egg': No such file or directory
Traceback (most recent call last):
File "/home/impdev/Impala/shell/impala_shell.py", line 26, in <module>
import prettytable
ImportError: No module named prettytable
impdev@eefa956ba515:~/Impala/shell$
I need this to assemble my Impala dev environment. Any ideas?
The content of my Dockerfile just says:
FROM ubuntu:16.04
After this I run:
docker build -t jcabrerazuniga/impalawiki:v1 .
To run this container I used (as the manual says):
docker run --cap-add SYS_TIME --interactive --tty --name impala-dev-wiki -p 25000:25000 -p 25010:25010 -p 25020:25020 jcabrerazuniga/impalawiki:v1 bash
Now, within the container:
apt-get update
apt-get install sudo
adduser --disabled-password --gecos '' impdev
echo 'impdev ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
su - impdev
Then, as impdev in the container:
sudo apt-get --yes install git
git clone https://git-wip-us.apache.org/repos/asf/impala.git ~/Impala
cd ~/Impala
export IMPALA_HOME=`pwd`
# See https://cwiki.apache.org/confluence/display/IMPALA/Building+Impala for developing Impala.
$IMPALA_HOME/bin/bootstrap_development.sh
And while the manual says I can now start developing, I just saw a terminal prompt. From another terminal I ran:
docker commit impala-dev-wiki && docker stop impala-dev-wiki
and later I ran:
docker start --interactive impala-dev-wiki
and tried to run impala-shell, getting the error(s) above.
Note: It seems the instructions posted on the cwiki page might be outdated. I also tried the Ubuntu 14.04 image and got an error message saying only versions 16.04 and 18.04 are supported. I am now also trying 18.04.
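A possible explanation, for what it's worth: "Bad substitution" is the error dash (Ubuntu's /bin/sh) prints when it hits a bash-only expansion, which suggests the wrapper script is being run with the wrong interpreter when invoked as sh impala-shell. A minimal reproduction of the symptom:

```shell
# ${var//pat/repl} is a bash extension; under dash (/bin/sh on Ubuntu),
#   sh -c 'x=hello; echo "${x//l/L}"'
# fails with "Bad substitution". Under bash the same expansion works:
bash -c 'x=hello; echo "${x//l/L}"'   # prints: heLLo
```

So running the script directly (./impala-shell, or bash impala-shell) should at least get past that first error; the prettytable import failure looks like a separate problem with the shell's Python dependencies not being on the path.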
Related
I need to compile gem5 with the environment inside Docker. This is infrequent, and once the compilation is done, I don't need the Docker environment anymore.
I have a Docker image named gerrie/gem5. I want to perform the following process:
Use this image to create a container, mount the local gem5 source code into it, compile and generate the executable (executables end up in the build directory by default), then exit the container and delete it. I also want to be able to see the compilation output, so that if the code goes wrong, I can fix it.
But I ran into some problems.
docker run -it --rm -v ${HOST_GEM5}:${DOCKER_GEM5} gerrie/gem5 bash -c "scons build/X86/gem5.opt"
When I execute the above command, I am dropped into a shell in the container, but the command to compile gem5 (scons build/X86/gem5.opt) is not executed. I think it might be because of the -it option, but when I remove that option, I don't see any output at all.
I replaced the command with the following sentence.
docker run -it --rm -v ${HOST_GEM5}:${DOCKER_GEM5} gerrie/gem5 bash -c "echo 'hello'"
But I still don't see any output.
When I went into the container and compiled it myself, the build directory was generated, but I found that I can't delete it from outside Docker.
What should I do? Thanks!
Dockerfile:
FROM matthewfeickert/docker-python3-ubuntu:latest
LABEL maintainer="Yujie YujieCui@pku.edu.cn"
USER root
# get dependencies
RUN set -x; \
sudo apt-get update \
&& DEBIAN_FRONTEND=noninteractive sudo apt-get install -y build-essential git-core m4 zlib1g zlib1g-dev libprotobuf-dev protobuf-compiler libprotoc-dev libgoogle-perftools-dev swig \
&& sudo -H python -m pip install scons==3.0.1 \
&& sudo -H python -m pip install six
RUN apt-get clean
# checkout repo with mercurial
# WORKDIR /usr/local/src
# RUN git clone https://github.com/gem5/gem5.git
# build it
WORKDIR /usr/local/src/gem5
ENTRYPOINT bash
I found that when cloning gem5 (perhaps because the repository is too big), it kept failing with: "fatal: unable to access 'https://github.com/gem5/gem5.git/': GnuTLS recv error (-110): The TLS connection was non-properly terminated."
So I commented out the RUN git clone https://github.com/gem5/gem5.git command.
You could make the entrypoint scons itself.
ENTRYPOINT ["scons"]
Or use the absolute path to the binary; I don't know where it will be installed, so you'll need to check:
ENTRYPOINT ["/usr/local/bin/scons"]
Then you can run
docker run -it --rm -v ${HOST_GEM5}:${DOCKER_GEM5} gerrie/gem5 build/X86/gem5.opt
If the sole purpose of the image is to invoke scons, that would be fairly idiomatic.
Otherwise, remove the entrypoint. Also note that you don't need to wrap the command in bash -c.
If you have removed the entrypoint you can run it like this.
docker run -it --rm -v ${HOST_GEM5}:${DOCKER_GEM5} gerrie/gem5 scons build/X86/gem5.opt
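For context on why the original command printed nothing: with the shell form ENTRYPOINT bash, Docker runs /bin/sh -c bash and ignores any command appended to docker run, including the bash -c "scons ..." part. A sketch of the exec-form alternative (the CMD default here is my own suggestion, not from the original Dockerfile):

```dockerfile
# Exec form: arguments appended to `docker run ... gerrie/gem5 <args>`
# are passed to scons instead of being discarded.
ENTRYPOINT ["scons"]
# Hypothetical default when no arguments are given on the command line:
CMD ["--help"]
```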
I installed a brand new GitLab CE 13.9.1 on an Ubuntu Server 20.04.2.0.
This is the pipeline:
image: node:latest
before_script:
- apt-get update -qq
stages:
- install
install:
stage: install
script:
- npm install --verbose
To run it, I configure my GitLab Runner using the same procedure as on my previous GitLab CE 12:
I pull the latest GitLab Runner image:
docker pull gitlab/gitlab-runner:latest
First try:
Start GitLab Runner container mounting on local volume
docker run -d \
--name gitlab-runner \
--restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
And register runner
docker run --rm -t -i \
-v /srv/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner register
When registering the runner, I pick shell as the executor.
Finally, when I push to GitLab, I see this error in the pipeline:
$ apt-get update -qq
E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied)
ERROR: Job failed: exit status 1
Second try:
Start GitLab Runner container mounting on Docker volume
Create volume
docker volume create gitlab-runner-config
Start GitLab Runner container
docker run -d \
--name gitlab-runner \
--restart always \
-v gitlab-runner-config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
Register runner (picking shell again as executor)
docker run \
--rm -t -i \
-v gitlab-runner-config:/etc/gitlab-runner gitlab/gitlab-runner register
Same results.
$ apt-get update -qq
E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied)
ERROR: Job failed: exit status 1
Third try:
Granting permissions to gitlab-runner
I ended up reading In gitlab CI the gitlab runner choose wrong executor and https://docs.gitlab.com/runner/executors/shell.html#running-as-unprivileged-user, which suggest these solutions:
move to docker
grant the gitlab-runner user the permissions it needs to run the specified commands. gitlab-runner may then run apt-get without sudo; it will also need permissions for npm install and npm run.
grant sudo NOPASSWD to the gitlab-runner user. Add gitlab-runner ALL=(ALL) NOPASSWD: ALL (or similar) to /etc/sudoers on the machine where gitlab-runner is installed, and change the apt-get update lines to sudo apt-get update, which will execute them as a privileged user (root).
I need to use the shell executor.
I already did that with sudo usermod -aG docker gitlab-runner.
I also tried sudo nano /etc/sudoers, adding gitlab-runner ALL=(ALL) NOPASSWD: ALL, and using sudo apt-get update -qq in the pipeline, which results in bash: line 106: sudo: command not found.
I'm pretty lost here now. Any idea will be welcome.
IMHO, using the shell executor on a Dockerized runner that already has the Docker socket mounted is not a good idea. You'd be better off using the docker executor, which will take care of everything and is probably how it's supposed to be run.
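For reference, a sketch of what the registered runner's config.toml might look like with the docker executor (the URL and token here are placeholders, and the image just mirrors the pipeline's node:latest):

```toml
[[runners]]
  name = "docker-runner"
  url = "https://gitlab.example.com/"   # placeholder
  token = "RUNNER_TOKEN"                # placeholder
  executor = "docker"
  [runners.docker]
    image = "node:latest"               # default image for jobs
    privileged = false
```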
Edit
Alternatively, you can use a customized Docker image to allow using the shell executor with root permissions. First, you'll need to create a Dockerfile:
FROM gitlab/gitlab-runner:latest
# Change user to root
USER root
Then, you'll have to build the image (here, I tagged it as custom-gitlab-runner):
$ docker build -t custom-gitlab-runner .
Finally, you'll need to use this image:
docker run -d \
--name gitlab-runner \
--restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
custom-gitlab-runner:latest
I had a similar issue trying to use a locally installed gitlab-runner on Ubuntu with a shell executor (I had other issues with the docker executor, which could not communicate between stages):
$ docker build -t myapp .
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&t=myapp&target=&ulimits=null&version=1": dial unix /var/run/docker.sock: connect: permission denied
ERROR: Job failed: exit status 1
I then printed which user was running the docker command from within the gitlab-ci.yml file; it was gitlab-runner:
...
build:
script:
- echo $USER
- docker build -t myapp .
...
I then added gitlab-runner to the docker group using
sudo usermod -aG docker gitlab-runner
which fixed my issue. No more docker permission errors.
While launching a command on my Docker image (docker run), I get the following error:
C:\Program Files\Docker\Docker\resources\bin\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"-n\": executable file not found in $PATH": unknown.
The image is a JMeter image that I have created myself:
FROM hauptmedia/java:oracle-java8
MAINTAINER maisie
ENV JMETER_VERSION 5.2.1
ENV JMETER_HOME /opt/jmeter
ENV JMETER_DOWNLOAD_URL https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz
RUN apt-get clean
RUN apt-get update
RUN apt-get -y install ca-certificates
RUN mkdir -p ${JMETER_HOME}
RUN cd ${JMETER_HOME}
RUN wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.2.1.tgz
RUN tar -xvzf apache-jmeter-5.2.1.tgz
RUN rm apache-jmeter-5.2.1.tgz
The command that I am launching is:
#!/bin/bash
export volume_path=$(pwd)
export jmeter_path="/opt/apache-jmeter-5.2.1/bin"
docker run --volume ${volume_path}:${jmeter_path} my/jmeter -n -t ${jmeter_path}/TEST.jmx -l ${jmeter_path}/res.jtl
I really can't find any answer to my problem ...
Thank you in advance for any help.
The general form of the docker run command is
docker run [docker options] <image name> [command]
So you are running an image named my/jmeter, and the command you are having it run is -n -t .... You're getting the error you are because you've only given a list of options and not an actual command.
The first part of this is to include the actual command in your docker run line:
docker run --rm my/jmeter \
jmeter -n ...
There's also going to be a problem with how you install the software in the Dockerfile. (You do not need a docker run --volume to supply software that's already in the image.) Each RUN command starts in a new shell in a new environment (in a new container even), so saying e.g. RUN cd ... in its own line doesn't do anything. You need to use Dockerfile directives like WORKDIR and ENV to change the environment. The jmeter command isn't in a standard binary directory so you'll also have a little trouble running it. I might change:
# ...
# Run all APT commands in a single command
# (Layer caching can break an install if the list of packages changes)
RUN apt-get clean \
&& apt-get update \
&& apt-get -y install ca-certificates
# Download and unpack the JMeter tar file
# This is all in a single RUN command, so
# (1) the `cd` at the start has its (temporary) effect, and
# (2) the tar file isn't committed to an image before you `rm` it
RUN cd /opt \
&& wget ${JMETER_DOWNLOAD_URL} \
&& tar xzf apache-jmeter-${JMETER_VERSION}.tgz \
&& rm apache-jmeter-${JMETER_VERSION}.tgz
# Create a symlink to the jmeter process in a normal bin directory
RUN ln -s /opt/apache-jmeter-${JMETER_VERSION}/bin/jmeter /usr/local/bin
# Indicate the default command to run
CMD jmeter
Finally, there will be questions around where to store data files. It's better to store data outside the application directory; in a Docker context it's common enough to use short (if non-standard) directory paths like /data. Remember that any file path in a docker run command refers to a path in the container, but you need a docker run -v bind-mount option (your original --volume is equivalent) to make it visible on the host. That would give you a final command like:
docker run -v "$PWD:/data" my/jmeter \
jmeter -n -t /data/TEST.jmx -l /data/res.jtl
I'm trying to install Composer in a Docker container. I have a container laravel55 and I want to install Composer inside it.
docker exec laravel55 curl --silent --show-error
https://getcomposer.org/installer | php
#result
Composer (version 1.6.5) successfully installed to: /root/docker-images/docker-php7-apache2/composer.phar
Use it: php composer.phar
After installation, I try to use Composer, but it doesn't work:
docker exec -w /var/www/html laravel55 php composer.phar install
#result
Could not open input file: composer.phar
It seems that Composer was not installed!
How can I install composer on a docker container?
Well, with your command you're actually installing composer.phar locally on your host; you only execute the curl command inside the container. The part behind the pipe symbol | is not executed in your Docker container but on your host. In your second command you switch your working directory to /var/www/html, where you apparently expect composer.phar, but you didn't in the first command.
So to make the whole command run in the container, you can try the following:
docker exec -w /var/www/html laravel55 \
sh -c "curl --silent --show-error https://getcomposer.org/installer | php"
You could use the official composer image from Docker Hub and mount a volume from your app container on it, i.e.:
docker run -td --name my_app --volume /var/www myprivateregistry/myapp
docker run --rm --interactive --volumes-from my_app --volume /tmp:/tmp --workdir /var/www composer install
I am trying to create a base image using the following commands, as listed in the Docker documentation:
https://docs.docker.com/articles/baseimages/
1) $ sudo debootstrap raring raring > /dev/null
2) $ sudo tar -C raring -c . | sudo docker import - raring
3) $ sudo docker run raring cat /etc/lsb-release
When I run command #3 I get the error FATA[0013] Error response from daemon: Cannot start container 1774ff3afe4f652bcff980ba5871c50bd1987b159c1f61c5d593d05460e82512: exec: "cat": executable file not found in $PATH
Any ideas what's going wrong?
Try sudo docker run -it raring /bin/bash -c "cat /etc/lsb-release"; you need to allocate a terminal.
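Note that the string after -c needs to be quoted; otherwise bash takes only cat as the command and /etc/lsb-release becomes $0 of the new shell, so cat runs with no file arguments. The same quoting behavior can be seen without Docker:

```shell
# Unquoted: "hello" becomes $0 of the new shell, not an argument to echo
bash -c echo hello        # prints an empty line
# Quoted: the whole string is the command
bash -c 'echo hello'      # prints: hello
```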
Looks like raring is no longer available in the Ubuntu repositories, so command #1 itself fails, and that is why #3 also fails :). I am now trying the trusty release.
The issue was that I was using the Ubuntu trusty release as my base operating system while trying to use the raring release when creating the base image. Once I moved from raring to trusty, it worked seamlessly.