I am trying to test Ansible in Docker containers: I have assigned one container as the "ansible-controller" and the other two as target nodes. I am using an SSH-enabled Ubuntu Docker image to spawn the containers. Below is the Dockerfile:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:Passw0rd' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]*
I have also installed the "sshpass" and "ansible" packages in the controller container, but when I test the Ansible "ping" module against a target node I get the following error:
target1 | FAILED! => {
"changed": false,
"failed": true,
"module_stderr": "",
"module_stdout": "Traceback (most recent call last):\r\n File \"/root/.ansi
ble/tmp/ansible-tmp-1586534281.47-159197676145489/ping\", line 44, in <module>\r
\n import exceptions\r\nImportError: No module named 'exceptions'\r\n",
"msg": "MODULE FAILURE",
"parsed": false
}
Below is my inventory file:
target1 ansible_host=172.17.0.3 ansible_ssh_pass=Passw0rd ansible_python_interpreter=/usr/bin/python3.5
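(An ad-hoc command along these lines reproduces the failure shown above; the inventory file name is an assumption:)
ansible target1 -i inventory -m ping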
I am unable to solve this issue, so I am asking for your assistance. Thanks in advance.
My bad... the target machine did not have Python installed.
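A sketch of that fix on the Ubuntu 16.04 targets: the traceback above comes from the Python-2-only exceptions module being imported under Python 3.5, so installing a Python 2 interpreter in each target container and pointing ansible_python_interpreter at it is likely the safe route for this Ansible release (the package name is an assumption):
apt-get update && apt-get install -y python
# then use ansible_python_interpreter=/usr/bin/python in the inventory, or drop the override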
I am running on a Mac M1 with Docker and gulp.
My first error was "command ld not found", but I fixed it with the answer here:
how to solve running gcc failed exist status 1 in mac m1?
After that, it led me to this error.
This is the full error:
[17:09:04] 'restart-supervisor' errored after 1.04 s
[17:14:45] '<anonymous>' errored after 220 ms
[17:14:45] Error in plugin "gulp-shell"
Message:
Command `supervisorctl restart projectname` failed with exit code 7
[17:14:45] 'restart-supervisor' errored after 838 ms
I've done a lot of research.
I've tried the suggestion here, but the command isn't found:
https://github.com/Supervisor/supervisor/issues/121
This as well:
https://github.com/Supervisor/supervisor/issues/1223.
I even changed my image to arm64v8/golang:1.17-alpine3.14.
This is my gulpfile.js:
var gulp = require("gulp");
var shell = require('gulp-shell');

// compile the Go binary
gulp.task("build-binary", shell.task('go build'));

// rebuild the binary, then restart the service under supervisor
gulp.task("restart-supervisor", gulp.series("build-binary", shell.task(
    'supervisorctl restart projectname'
)));

gulp.task('watch', function() {
    gulp.watch(
        ["*.go", "*.mod", "*.sum", "**/*.go", "**/*.mod", "**/*.sum"],
        {interval: 1000, usePolling: true},
        gulp.series('build-binary', 'restart-supervisor')
    );
});

gulp.task('default', gulp.series('watch'));
This is my current Dockerfile:
FROM arm64v8/golang:1.17-alpine3.14
RUN apk update && apk add gcc make git libc-dev binutils-gold
# Install dependencies
RUN apk add --update tzdata \
--no-cache ca-certificates git wget \
nodejs npm \
g++ \
supervisor \
&& update-ca-certificates \
&& npm install -g gulp gulp-shell
COPY ops/api/local/supervisor /etc
ENV PATH $PATH:/go/bin
WORKDIR /go/src/github.com/projectname/src/api
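(As a quick sanity check for the "command isn't found" part, one can verify that the apk-installed tools actually end up on PATH in an image built from this Dockerfile; the image tag here is a placeholder:)
docker run --rm my-api-image sh -c 'which supervisord supervisorctl gulp'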
In my docker-compose.yaml I have this:
entrypoint:
[
"sh",
"-c",
"npm install gulp gulp-shell && supervisord -c /etc/supervisord.conf && gulp"
]
And here is /etc/supervisord.conf:
#!/bin/sh
[unix_http_server]
file=/tmp/supervisor.sock
username=admin
password=revproxy
[supervisord]
nodaemon=false
user=root
logfile=/dev/null
logfile_maxbytes=0
logfile_backups=0
loglevel=info
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
username=admin
password=revproxy
[program:projectname_api]
directory=/go/src/github.com/projectname/src/api
command=/go/src/github.com/projectname/src/api/api
autostart=true
autorestart=true
stderr_logfile=/go/src/github.com/projectname/src/api/api_err.log
stderr_logfile_maxbytes=0
stdout_logfile=/go/src/github.com/projectname/src/api/api_debug.log
stdout_logfile_maxbytes=0
startsecs=0
But seriously, what is wrong with this Mac M1?
I have tried doing it with Rosetta and without Rosetta, version 2.
If the title of my question is wrong, please correct me; I am also not sure about my error.
I fixed the problem by adding #!/bin/sh and startsecs=0. No errors are showing anymore, but the next problem is that the API is not running.
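A way to dig into why the API program does not come up, using the program name and log paths from the config above (a sketch, assuming supervisord is already running in the container):
supervisorctl -c /etc/supervisord.conf status projectname_api
tail -n 50 /go/src/github.com/projectname/src/api/api_err.log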
When running a sh script from my Dockerfile, I got the following error:
./upload.sh: 5: ./upload.sh: sudo: not found ./upload.sh: 21:
./upload.sh: Bad substitution
sudo chmod 755 upload.sh # line 5
version=$(git rev-parse --short HEAD)
echo "version $version"
echo "Uploading file"
for path in $(find public/files -name "*.txt"); do
echo "path $path"
WORDTOREMOVE="public/"
echo "WORDTOREMOVE $WORDTOREMOVE"
# cause of the error
newpath=${path//$WORDTOREMOVE/} # Line 21
echo "new path $path"
url=http://localhost:3000/${newpath}
...
echo "Uploading file"
...
done
Dockerfile
FROM node:10-slim
EXPOSE 3000 4001
WORKDIR /prod/code
...
COPY . .
RUN ./upload.sh
RUN npm run build
CMD ./DockerRun.sh
Any idea?
If anyone faces the same issue, here is how I fixed it:
chmod +x upload.sh
git update-index --chmod=+x upload.sh (mandatory if you pushed the file to remote branch before changing its permission)
The Docker image you are using (node:10-slim) has no sudo installed, because this image runs its processes as the root user anyway:
docker run -it node:10-slim bash
root@68dcffceb88c:/# id
uid=0(root) gid=0(root) groups=0(root)
root@68dcffceb88c:/# which sudo
root@68dcffceb88c:/#
When your Dockerfile runs RUN ./upload.sh it will run:
sudo chmod 755 upload.sh
Using sudo inside the container fails because sudo is not installed; there is no need for it anyway, because all of the commands in the build already run as root.
Simply remove the sudo from line number 5.
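With that change, line 5 of the script becomes a plain chmod; alternatively, the permission can be set in the Dockerfile before the script runs (the second form is my sketch, not part of the original answer):
chmod 755 upload.sh
# or, in the Dockerfile:
# RUN chmod +x upload.sh && ./upload.sh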
If you wish to update the running PATH variable run:
PATH=$PATH:/directorytoadd/bin
This will append the directory "/directorytoadd/bin" to the current path.
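Since the context here is a Docker build, the equivalent at image-build time would be an ENV instruction in the Dockerfile (same placeholder directory):
ENV PATH="${PATH}:/directorytoadd/bin"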
I want to isolate a testing environment in Docker. I already did that on CentOS 6 (see "How to let syslog workable in docker?").
In CentOS 7, syslog-ng's configuration is different. When I run
/usr/sbin/syslog-ng -F -p /var/run/syslogd.pid
It prints the following error messages, even though proc/kmsg does not appear in any config file:
syslog-ng: Error setting capabilities, capability management disabled; error='Operation not permitted'
Error opening file for reading; filename='/proc/kmsg', error='Operation not permitted (1)'
The Dockerfile
FROM centos
RUN yum update --exclude=systemd -y \
&& yum install -y yum-plugin-ovl \
&& yum install -y epel-release
RUN yum install -y syslog-ng syslog-ng-libdbi
The test process:
docker build -t t1 .
docker run --rm -i -t t1 /bin/bash
In the container, run the following commands:
# check config, no keyword like proc/kmsg
cd /etc/syslog-ng
grep -r -E 'proc|kmsg'
/usr/sbin/syslog-ng -F -p /var/run/syslogd.pid
Change /etc/syslog-ng/syslog-ng.conf from
source s_sys {
system();
internal();
};
to
source s_sys {
unix-stream("/dev/log");
internal();
};
It still shows an error message, but now it keeps running instead of exiting:
syslog-ng: Error setting capabilities, capability management disabled; error='Operation not permitted'
To solve this, just run it with the --no-caps option:
/usr/sbin/syslog-ng --no-caps -F -p /var/run/syslogd.pid
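Inside an image, the flag can simply be baked into the default command; a minimal sketch for the Dockerfile above, assuming syslog-ng should run as the container's main process:
CMD ["/usr/sbin/syslog-ng", "--no-caps", "-F", "-p", "/var/run/syslogd.pid"]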
I am trying to create a Docker container with a custom D-Bus bus running inside.
I configured my Dockerfile as follows:
FROM ubuntu:16.04
COPY myCustomDbus.conf /etc/dbus-1/
RUN apt-get update && apt-get install -y dbus
RUN dbus-daemon --config-file=/etc/dbus-1/myCustomDbus.conf
After building, the socket is created, but it shows up as a regular file rather than a socket, and I cannot use it as a bus:
-rwxrwxrwx 1 root root 0 Mar 20 07:25 myCustomDbus.sock
If I remove this file and run the dbus-daemon command again in a terminal, the socket is created correctly:
srwxrwxrwx 1 root root 0 Mar 20 07:35 myCustomDbus.sock
I am not sure if it is a D-Bus problem or a docker one.
Instead of using the RUN instruction, you should use ENTRYPOINT to run a startup script.
The Dockerfile should look like this:
FROM ubuntu:14.04
COPY myCustomDbus.conf /etc/dbus-1/
COPY run.sh /etc/init/
RUN apt-get update && apt-get install -y dbus
ENTRYPOINT ["/etc/init/run.sh"]
And run.sh :
#!/bin/bash
dbus-daemon --config-file=/etc/dbus-1/myCustomDbus.conf --print-address
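One assumption here is that run.sh is already executable in the build context; if it is not, a chmod step can be added to the Dockerfile (my addition, not part of the original answer):
RUN chmod +x /etc/init/run.sh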
You should use a startup script. A RUN instruction is executed only while the image is being built, so a daemon started there is no longer running when the container starts.
my run.sh:
if ! pgrep -x "dbus-daemon" > /dev/null
then
# export DBUS_SESSION_BUS_ADDRESS=$(dbus-daemon --config-file=/usr/share/dbus-1/system.conf --print-address | cut -d, -f1)
# or:
dbus-daemon --config-file=/usr/share/dbus-1/system.conf
# and put in Dockerfile:
# ENV DBUS_SESSION_BUS_ADDRESS="unix:path=/var/run/dbus/system_bus_socket"
else
echo "dbus-daemon already running"
fi
if ! pgrep -x "/usr/lib/upower/upowerd" > /dev/null
then
/usr/lib/upower/upowerd &
else
echo "upowerd already running"
fi
Then Chrome runs with --use-gl=swiftshader without errors.
I just upgraded my Nagios server to the latest version (4.0.1) on my Debian 7 system. When I start the daemon, I get the following error:
# /etc/init.d/nagios start
/etc/init.d/nagios: 20: .: Can't open /etc/rc.d/init.d/functions
The file /etc/rc.d/init.d/functions does not exist on my Debian system (nor on my Ubuntu 12.04 workstation).
What can I do to solve this issue?
===
Update:
Just hack the startup script with the following commands:
sudo apt-get install daemon
sudo sed -i 's/^\.\ \/etc\/rc.d\/init.d\/functions$/\.\ \/lib\/lsb\/init-functions/g' /etc/init.d/nagios
sudo sed -i 's/status\ /status_of_proc\ /g' /etc/init.d/nagios
sudo sed -i 's/daemon\ --user=\$user\ \$exec\ -ud\ \$config/daemon\ --user=\$user\ --\ \$exec\ -d\ \$config/g' /etc/init.d/nagios
sudo sed -i 's/\/var\/lock\/subsys\/\$prog/\/var\/lock\/\$prog/g' /etc/init.d/nagios
sudo service nagios start
Works fine on my Debian server.
You can simply write your own init script. Copy /etc/init.d/skeleton to /etc/init.d/nagios and fill in the values in that file:
DESC="Nagios"
NAME=nagios
DAEMON=/usr/local/nagios/bin/$NAME
DAEMON_ARGS="-d /usr/local/nagios/etc/nagios.cfg"
PIDFILE=/usr/local/nagios/var/$NAME.lock
I also commented out these lines:
#[ -r /etc/default/$NAME ] && . /etc/default/$NAME
and
#start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON --test > /dev/null \
# || return 1
Don't forget to chmod +x /etc/init.d/nagios.
Have fun.
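A sketch of the copy step described above, assuming the stock skeleton shipped on Debian/Ubuntu:
sudo cp /etc/init.d/skeleton /etc/init.d/nagios
# edit DESC, NAME, DAEMON, DAEMON_ARGS and PIDFILE as listed above
sudo chmod +x /etc/init.d/nagios
sudo service nagios start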
A little addition for Ubuntu 12.04 (desktop):
the 'runuser' program doesn't exist on Debian-like systems, so use 'su' instead;
the 'service' program is not located in /sbin but in /usr/sbin.
So here are Nicolargo's mods plus some of mine:
sudo apt-get install daemon
sudo sed -i 's/^\.\ \/etc\/rc.d\/init.d\/functions$/\.\ \/lib\/lsb\/init-functions/g' /etc/init.d/nagios
sudo sed -i 's/status\ /status_of_proc\ /g' /etc/init.d/nagios
sudo sed -i 's/daemon\ --user=\$user\ \$exec\ -ud\ \$config/daemon\ --user=\$user\ --\ \$exec\ -d\ \$config/g' /etc/init.d/nagios
sudo sed -i 's/\/var\/lock\/subsys\/\$prog/\/var\/lock\/\$prog/g' /etc/init.d/nagios
sudo sed -i 's/\/sbin\/service\ /\/usr\/sbin\/service\ /g' /etc/init.d/nagios
sudo sed -i 's/runuser/su/g' /etc/init.d/nagios
sudo service nagios start
I also removed the '-d 10' option passed to killproc in the stop sequence (around line 94) to avoid this error message on 'service nagios stop':
$Stopping nagios: Illegal option -d
/sbin/start-stop-daemon: signal value must be numeric or name of signal (KILL, INT, ...)
Try '/sbin/start-stop-daemon --help' for more information.
Enjoy!
You've probably found a solution, but to answer the question:
One possible solution is installing Nagios 3.x from your package manager and then updating to 4 by compiling it from source. The new init script seems to be messed up, but the older one still works.
Source (German): http://www.monitoring-portal.org/wbb/index.php?page=Thread&threadID=29431&pageNo=2
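A rough outline of that approach (the package name and build steps are my assumptions, not taken from the linked post):
sudo apt-get install nagios3        # provides a working distribution init script
# then, from the Nagios 4.0.x source tree:
./configure
make all
sudo make install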