At the moment I work with ARM64-based Debian images and Docker.
I want to start the Docker daemon automatically on boot so we do not have to start it manually. But the images do not use systemd, just good old SysVinit.
So I thought: quite easy, simply an init script that runs "dockerd" (or start-stop-daemon with dockerd as argument). But no, it does not work. The command "dockerd -v" works fine during boot (checked by piping the output to a log file). But when "dockerd" is executed without an argument, i.e. simply starting the daemon, nothing happens: no error, no warning, nothing is piped to the log file.
So my question is: are there any other processes that need to be started or configurations that need to be done before this dockerd command can run?
When boot is finished and I SSH into the device and run "dockerd" manually, everything works fine.
Just to close this question myself :D
I noticed that on a SysVinit system the PATH variable is not set when the init scripts are started (maybe because root starts the processes).
So in my script I just set the PATH variable to include the folder containing dockerd, and everything worked well! :D
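For illustration, a minimal sketch of such an init script; the dockerd location (/usr/bin) and the PATH value are assumptions, adjust them to your image:

#!/bin/sh
# Minimal SysVinit sketch for starting dockerd at boot.
# Init scripts run with an almost empty environment, so set PATH explicitly.
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export PATH

case "$1" in
  start)
    # run dockerd in the background, detached from the boot process
    start-stop-daemon --start --background --exec /usr/bin/dockerd
    ;;
  stop)
    start-stop-daemon --stop --exec /usr/bin/dockerd
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac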
In my Docker container (based on the SUSE distribution SLES 15), both the C++ executable (built with debug information) and the gdbserver executable are installed.
Before doing anything productive, the C++ executable sleeps for 5 seconds, then initializes and processes data from a database. The processing time is long enough to attach gdbserver to it.
The C++ executable is started in the background and its process id is returned to the console.
Immediately afterwards the gdbserver is started and attaches to the same process id.
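In shell terms, the startup sequence looks roughly like this (the executable name and port are placeholders):

./my_app &                              # start the C++ executable in the background
APP_PID=$!                              # the process id returned to the console
gdbserver --attach :44444 "$APP_PID"    # attach gdbserver to the same process id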
Problem: gdbserver complains about not being able to attach to the process:
Cannot attach to lwp 59: No such file or directory (2)
Exiting
In another attempt, I copied the same gdbserver executable to /tmp in the Docker container.
Starting this gdbserver gave a different error response:
Cannot attach to process 220: Operation not permitted (1)
Exiting
It has been verified that in both cases the process is still running; 'ps -e' clearly shows the process id and the process name.
If the process has already finished, a different error message is thrown; this is clear and needs no explanation:
gdbserver: unable to open /proc file '/proc/79/status'
The gdbserver was started once from outside of the container and once from inside.
In both scenarios gdbserver refused to attach to the running process:
$ kubectl exec -it POD_NAME --container debugger -- gdbserver --attach :44444 59
Cannot attach to lwp 59: No such file or directory (2)
Exiting
$ kubectl exec -it POD_NAME -- /bin/bash
bash-4.4$ cd /tmp
bash-4.4$ ./gdbserver 10.0.2.15:44444 --attach 220
Cannot attach to process 220: Operation not permitted (1)
Exiting
Can someone explain what causes gdbserver to refuse to attach to the specified process, and give advice on how to overcome the problem, i.e. what do I need to examine to prepare the right handshake between the C++ executable and gdbserver?
The basic reason why gdbserver could not attach to the running C++ process is a security enhancement introduced in Ubuntu (versions >= 10.10):
By default, process A cannot trace a running process B unless B is a direct child of A
(or A runs as root).
Direct debugging is still always allowed, e.g. gdb EXE and strace EXE.
The restriction can be loosened by changing the value of /proc/sys/kernel/yama/ptrace_scope from 1 (the default) to 0 (tracing allowed for all processes). The security setting can be changed with:
echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
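This change does not survive a reboot. To make it persistent, the same value can be placed in a sysctl configuration file; the file name below follows the Ubuntu convention and is an assumption for other distributions:

# persist the relaxed ptrace scope across reboots
echo "kernel.yama.ptrace_scope = 0" | sudo tee /etc/sysctl.d/10-ptrace.conf
sudo sysctl --system    # reload all sysctl configuration files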
All credit for the description of the ptrace scope belongs to the following post; see the 2nd answer by Eliah Kagan - thank you for the thorough explanation! - here:
https://askubuntu.com/questions/143561/why-wont-strace-gdb-attach-to-a-process-even-though-im-root
I had Docker Toolbox installed on my Windows 7 PC and I wanted to upgrade my Docker installation to the most recent version. To do that, I decided to delete Docker Toolbox from my system and reinstall it. I uninstalled Docker Toolbox, uninstalled VirtualBox, and removed all remaining files of both (such as files in AppData). After reinstalling Docker Toolbox and launching the Quickstart Terminal, I ran into the following error:
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: C:\Program Files\Docker Toolbox\docker-machine.exe env default
Looks like something went wrong in step 'Setting env'... Press any key to continue...
So it seems like it failed when "setting env". I'm not sure what that means in this context, and I wish there were a way to check some extended logs for more detail. I tried following the Docker documentation pointing to the location of the daemon logs in AppData; however, I could not find anything relevant. Something I did find was a file called "no-error-report", though it was empty.
I tried uninstalling everything again and reinstalling with the NDIS5 network type option checked, and I've run the Quickstart Terminal as admin, but I still ran into the same exact error.
Any suggestions on how I may approach this issue?
I got the same issue.
I fixed it with the procedure below.
I changed the following lines in start.sh:
STEP="Setting env"
eval "$("${DOCKER_MACHINE}" env --shell=bash --no-proxy "${VM}" | sed -e "s/export/SETX/g" | sed -e "s/=/ /g")" &> /dev/null #for persistent Environment Variables, available in next sessions
eval "$("${DOCKER_MACHINE}" env --shell=bash --no-proxy "${VM}")" #for transient Environment Variables, available in current session
I changed --no-proxy to --http_proxy since I am using an HTTP proxy.
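Note that the first eval line above discards all output (&> /dev/null), so a failure in that step is silent. To see the underlying error message, the env command mentioned in the Quickstart output can be run by hand in the Toolbox terminal:

docker-machine env default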
I want to work on a project, but I need Docker to run the app, and the docker-compose up command fails with this error:
System error: exec: "./wait_to_start": stat ./wait_to_start:
no such file or directory
The wait_to_start command is an executable Python script in the subfolder backend/.
I need to determine why it cannot be executed. Either it is searched for in the wrong path, or there are permission problems, or maybe the wrong Python version is used.
Can I debug this in detail, or log in with SSH and check the files on the virtual machine? I'm too inexperienced with Docker...
You can either set the "workdir" metadata to make sure you are in the right place when you start the container, or simply call /backend/wait_to_start instead of ./wait_to_start, which removes the need to be in the proper directory. A compose sketch of the first option follows below.
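A minimal docker-compose.yml sketch of the working-directory approach (the service name and build context are assumptions):

version: "2"
services:
  backend:
    build: .
    working_dir: /backend      # relative paths like ./wait_to_start now resolve here
    command: ./wait_to_start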
To debug with docker-compose I would do this:
docker-compose run --entrypoint bash <servicename>
That should give you a prompt and let you inspect the file and the working directory, to see what's wrong.
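From that prompt, a few quick checks usually narrow the problem down (paths assumed from the question):

pwd                                  # which directory does the container start in?
ls -l /backend/wait_to_start         # does the file exist, and is its execute bit set?
head -n 1 /backend/wait_to_start     # does the shebang point to an interpreter that exists in the image?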
I have a server written in Erlang, compiled with Rebar, and I make a release with Relx. It starts nicely with
/root/rel/share3/bin/share3 start
The next step is to start it when the server boots.
I have tried different approaches; the last one uses /etc/init.d/skeleton, where I changed the following:
NAME=share3
DAEMON=/root/rel/share3/bin/share3
DAEMON_ARGS="$1"
After that, I ran update-rc.d, but I have not gotten it to work. (Ubuntu 14.04)
The service runs until the machine reboots, and then I need to log in and start it again.
For Windows it is really elegant, since Relx can create a Windows service.
Ubuntu 14.04 uses Upstart as its init system, so you could try something like this:
description "Start my awesome service"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /root/rel/share3/bin/share3
You have to place this script in the /etc/init/ directory with a '.conf' extension, e.g. /etc/init/share3.conf. To start it, invoke sudo start share3.
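Upstart also provides matching commands to inspect and stop the job:

sudo status share3    # show whether the job is running
sudo stop share3      # stop the job manually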
At last, I solved it!
I have told Relx to place the result at /home/mattias/rel. The script generated by Relx is /home/mattias/rel/share3/bin/share3.
Replace the line
SCRIPT_DIR="$(dirname "$0")"
with (you need to adapt the path /home/mattias/rel):
HOME=/home/mattias
export HOME
SCRIPT_DIR="/home/mattias/rel/share3/bin"
Copy the file to /etc/init.d/share3 using
sudo cp ~/rel/share3/bin/share3 /etc/init.d/
Test that it works using
/etc/init.d/share3 start
and
/etc/init.d/share3 stop
In order to make it start at boot, install sysv-rc-conf
sudo apt-get install sysv-rc-conf
Enable start at boot using
sudo sysv-rc-conf share3 on
and disable it with
sudo sysv-rc-conf share3 off
Alternatives are welcome.
I'm currently struggling with executing a simple command, which I know works when I run it manually while logged in as either root or a non-root user:
god -c path/to/app/queue_worker.god
I'm trying to run this when the server starts (I'm running Ubuntu 12.04), and I've investigated adding it to /etc/rc.local just to see if it runs. I know I can add it to /etc/init.d and then use update-rc.d, but as far as I understand it's basically the same thing.
My question is how to run this command after everything has booted up, as cleanly as possible and without any fuss.
I'm probably missing something in the lifecycle of how everything is initialized, but then I gladly welcome some education! Are there alternative ways or places to put this command?
Thanks!
You could write a bash script to determine when Apache has started and then set it to run as a cron job at a set interval...
if [ "$(pidof apache)" ]
then
# process was found
else
# process not found
fi
Of course, then you'll have a useless cron job running all the time, and you'll have to somehow flip a switch once it has run so it doesn't run again... This should give you an idea to start from; one way to implement that switch is sketched below.
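As a sketch of such a switch, a marker file can make the job a no-op after the first successful run (the marker path and the apache2 process name are assumptions):

#!/bin/bash
# Run from cron; start the worker exactly once, after Apache is up.
MARKER=/var/run/queue_worker.started
if [ ! -f "$MARKER" ] && [ -n "$(pidof apache2)" ]
then
    god -c path/to/app/queue_worker.god
    touch "$MARKER"    # flip the switch: later cron runs do nothing
fi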