I am trying to set up a Discourse forum on OpenShift. When I try to install Docker:
$> wget -qO- https://get.docker.io/ | sh
Error: this installer needs the ability to run commands as root.
We are unable to find either "sudo" or "su" available to make this happen.
$> sudo wget -qO- https://get.docker.io/ | sh
bash: usr/bin/sudo: permission denied.
$> su wget -qO- https://get.docker.io/ | sh
bash: /bin/sudo: permission denied.
As a web developer, you do not have root access on OpenShift. You also do not need Docker, nor will it work on the current version of OpenShift.
You should use this instead:
https://github.com/liquidautumn/discourse-quickstart/tree/master/.openshift
What I have:
I am creating a Jenkins (Blue Ocean) pipeline for CI/CD. I am running Jenkins with the Docker-in-Docker approach described in the Jenkins documentation tutorial.
I have tested the setup and it works fine: I can build and run Docker images in the Jenkins container. Now I am trying to use docker-compose, but it says docker-compose: not found
Problem:
Unable to use `docker-compose` inside the Jenkins container.
What I want:
I want to be able to use `docker-compose` inside the container using the dind (Docker-in-Docker) approach.
Any help would be very much appreciated.
Here is my working solution:
FROM maven:3.6-jdk-8
USER root
RUN apt update -y
RUN apt install -y curl
# Install Docker
RUN curl https://get.docker.com/builds/Linux/x86_64/docker-latest.tgz | tar xvz -C /tmp/ && mv /tmp/docker/docker /usr/bin/docker
# Install Docker Compose
RUN curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/bin/docker-compose \
    && chmod +x /usr/bin/docker-compose
# Add your customizations here...
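To sanity-check the result, you can build the image and print both versions; a minimal sketch, assuming the image tag my-jenkins-agent (pick your own):
docker build -t my-jenkins-agent .
# Neither command needs a running daemon, so this only checks that both binaries are on the PATH
docker run --rm my-jenkins-agent docker --version
docker run --rm my-jenkins-agent docker-compose --version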
It seems docker-compose is not installed on that machine.
You can check whether docker-compose is installed using docker-compose --version. If it is not installed, you can install it in one of the following ways:
Using the apt package manager: sudo apt install -y docker-compose
OR
Using the Python package manager pip: sudo pip install docker-compose
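If Jenkins is already running in a container, a rough sketch of checking and installing there (the container name jenkins is an assumption, and pip may not be available in your image):
# Check whether the binary is already present inside the Jenkins container
docker exec jenkins docker-compose --version
# Install it as root inside that container (assumes pip exists in the image)
docker exec -u root jenkins pip install docker-compose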
I am trying to run Docker in rootless mode on an Ubuntu VM.
I am provisioning the VM using a Terraform script.
I am using the Terraform run module to execute commands and install prerequisites.
But when I add the rootless-mode Docker commands to this run module, those scripts do not execute, even though I switch to the new user.
Below are the commands I need to execute as another user in the Terraform run module.
sudo apt-get install -y uidmap
curl -fsSL https://get.docker.com/rootless | sh
export DOCKER_HOST=unix:///run/user/1000/docker.sock
systemctl --user status docker
I am getting the error: Refusing to install rootless Docker as the root user
My goal is to automate this using a Terraform run module.
Could anyone help me solve this issue, or is there another workaround?
Thanks in advance.
I have tried switching user and executing, but these commands still do not run.
sudo apt-get install -y uidmap
curl -fsSL https://get.docker.com/rootless | sh
export DOCKER_HOST=unix:///run/user/1000/docker.sock
systemctl --user status docker
I would like to automate this using a Terraform run module or any other way, but it must run once the VM is provisioned.
Read the script at https://get.docker.com/rootless, which says: "This script should be run with an unprivileged user and install/setup Docker under $HOME/bin/"
If you do still want to install it, you just need to read the shell script; it contains this part:
# User verification: deny running as root (unless forced?)
if [ "$(id -u)" = "0" ] && [ -z "$FORCE_ROOTLESS_INSTALL" ]; then
>&2 echo "Refusing to install rootless Docker as the root user"; exit 1
fi
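So the fix is to run the installer as the unprivileged target user instead of as root. A rough, unverified sketch of what the provisioning commands could look like (the user name ubuntu and uid 1000 are assumptions; adjust them to the account you create on the VM):
sudo apt-get install -y uidmap
# Keep the user's systemd instance alive without an interactive login
sudo loginctl enable-linger ubuntu
# Run the installer as the unprivileged user, not as root
sudo -Hu ubuntu env XDG_RUNTIME_DIR=/run/user/1000 sh -c 'curl -fsSL https://get.docker.com/rootless | sh'
# Check the per-user daemon
sudo -Hu ubuntu env XDG_RUNTIME_DIR=/run/user/1000 systemctl --user status docker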
I tried installing Impala in a docker container following the instructions at:
https://cwiki.apache.org/confluence/display/IMPALA/Impala+Development+Environment+inside+Docker
to the letter. Yet, I got the error below:
impdev@eefa956ba515:~/Impala/shell$ sh impala-shell
impala-shell: 32: impala-shell: Bad substitution
ls: cannot access '/home/impdev/Impala/shell/ext-py/*.egg': No such file or directory
Traceback (most recent call last):
File "/home/impdev/Impala/shell/impala_shell.py", line 26, in <module>
import prettytable
ImportError: No module named prettytable
impdev@eefa956ba515:~/Impala/shell$
I need this to assemble my Impala dev environment. Any ideas?
The content of my Dockerfile just says:
FROM ubuntu:16.04
After this I run:
docker build -t jcabrerazuniga/impalawiki:v1 .
To run this container I used (as the manual says):
docker run --cap-add SYS_TIME --interactive --tty --name impala-dev-wiki -p 25000:25000 -p 25010:25010 -p 25020:25020 jcabrerazuniga/impalawiki:v1 bash
Now, within the container:
apt-get update
apt-get install sudo
adduser --disabled-password --gecos '' impdev
echo 'impdev ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
su - impdev
Then, as impdev in the container:
sudo apt-get --yes install git
git clone https://git-wip-us.apache.org/repos/asf/impala.git ~/Impala
cd ~/Impala
export IMPALA_HOME=`pwd`
# See https://cwiki.apache.org/confluence/display/IMPALA/Building+Impala for developing Impala.
$IMPALA_HOME/bin/bootstrap_development.sh
And while the manual says I can start developing, I just saw a terminal prompt. From another terminal I ran:
docker commit impala-dev-wiki && docker stop impala-dev-wiki
and later I ran:
docker start --interactive impala-dev-wiki
and tried to run impala-shell, getting the previous error(s).
Note: It seems the instructions posted on the cwiki page might be outdated. I also tried using an Ubuntu 14.04 image and got an error message saying only versions 16.04 and 18.04 are supported. Now I am also trying with 18.04.
When I installed Docker initially, it showed version 1.0.1.
Given that the current version is 1.4.1, I found and executed the following instructions:
$ sudo apt-get update
$ sudo apt-get install docker.io
$ sudo ln -sf /usr/bin/docker.io /usr/local/bin/docker
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
$ sudo sh -c "echo deb https://get.docker.io/ubuntu docker main \
> /etc/apt/sources.list.d/docker.list"
$ sudo apt-get update
$ sudo apt-get install lxc-docker
Now, when I run docker version I get 1.4.1, but docker no longer works - it gives me this error:
root#8dedd2fff58e:/# docker version
Client version: 1.4.1
Client API version: 1.16
Go version (client): go1.3.3
Git commit (client): 5bc2ff8
OS/Arch (client): linux/amd64
FATA[0000] Get http:///var/run/docker.sock/v1.16/version: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
What can I do to fix this, but retain the most current Docker version, 1.4.1?
/var/run/docker.sock will be created when you start the Docker service:
systemd:
sudo systemctl start docker
upstart:
sudo service docker start
init.d:
sudo /etc/init.d/docker start
You might also need to do this if you get this error:
FATA[0000] Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
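Once the service is up, a quick sanity check (a sketch; pick the service command that matches your init system):
sudo systemctl status docker        # or: sudo service docker status
ls -l /var/run/docker.sock          # the socket should now exist
docker version                      # both client and server sections should print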
I had the same issue on Mac OS X. Leaving my fix here in case it helps somebody:
Run the "Docker Quick Start Terminal"
In the target directory, run eval "$(docker-machine env default)"
This fixed the issue for me.
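If your machine is not named default, list the machines first; a small sketch:
docker-machine ls                       # find your machine's name
eval "$(docker-machine env default)"    # substitute your machine's name for "default"
docker version                          # should now reach the daemon inside the VM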
I was experiencing the same problem and I was able to find the solution here: https://docs.docker.com/articles/basics/.
It's always good to go back to foundations.
The problem is that the daemon might be listening on a TCP port instead of the default socket (unix:///var/run/docker.sock).
If you run "ps aux | grep docker" you should see the daemon running. At the end of the docker process's line you should also see a parameter -H={IpAddress}:{Port}, and the path where the certificates are stored (--tls parameters).
You have to instruct the docker client to connect to the TCP address specified in the -H parameter.
For example:
`docker --tls -H tcp://{IpAddress}:{Port} version`
Notice the --tls parameter; it is necessary if you instructed Docker to run in secure mode.
You could avoid the verbosity of the command by setting environment variables.
export DOCKER_HOST="tcp://{IpAddress}:{Port}"
export DOCKER_TLS_VERIFY="1"
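For example, with purely hypothetical values (take the real address, port, and certificate directory from the -H and --tls flags you found with ps):
export DOCKER_HOST="tcp://192.168.99.100:2376"   # hypothetical address and port
export DOCKER_TLS_VERIFY="1"
export DOCKER_CERT_PATH="$HOME/.docker"          # hypothetical certificate directory
docker version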
Hope this helps.
Is Docker running as a daemon?
Check with service docker.io status or service docker status.
If not, start it and try again.
On a fresh M1 MacBook I ran into this. Amazingly, the solution was simply to log in to the app using my Docker account details. Once I did that, I re-ran the failed command and it worked.
We have a Linux system that we do not have full control of. Basically, we cannot modify the sudoers file there (it is on a remote, read-only file system).
Our "solution" for giving the hudson user sudo privileges was to add this user to the sudo group in the /etc/group file. With this approach I can execute sudo as the hudson user once I ssh to the machine. However, when I try to execute sudo from a Hudson job on this system, I get the following error:
+ id
uid=60000(hudson) gid=60000(hudson) groups=60000(hudson),31(sudo)
+ cat passfile
+ sudo -S -v
Sorry, user hudson may not run sudo on sc11136681.
+ cat passfile
+ sudo -S ls /root
hudson is not allowed to run sudo on sc11136681. This incident will be reported.
The above is trying to execute:
cat passfile | sudo -S -v
cat passfile | sudo -S ls /root
Why does it work when I ssh to the machine directly but not when Hudson uses ssh? Is there a way to make sudo work in a Hudson job without adding the hudson user to the sudoers file?
Edit: here is the output when executing the sudo commands after I ssh to the system as the hudson user:
[hudson#sc11136681 ~]$ cat passfile | sudo -S -v
[sudo] password for hudson: [hudson#sc11136681 ~]$
[hudson#sc11136681 ~]$
[hudson#sc11136681 ~]$ cat passfile | sudo -S ls /root
anaconda-ks.cfg install.log.syslog jaytest
install.log iscsi_pool_protocol_fields_file subnets
The solution to this problem that worked for us was to install a local sudo on the system. Command used:
sudo yum reinstall sudo
Once installed, we had to make sure the right sudo was used:
export PATH=/usr/bin:$PATH
The above can be added to the slave configuration so it works for all jobs on that slave.
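For reference, a sketch of how the job's shell step looks once the local sudo is in place (the passfile and the commands are taken from the question above):
export PATH=/usr/bin:$PATH      # pick up the locally installed sudo first
cat passfile | sudo -S -v
cat passfile | sudo -S ls /root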