Detailed Step by Step for Installing Docker on Synology DS120j (aarch64)? - docker

I have scoured the internet for detailed instructions on how to install Docker specifically on a Synology DS120j (aarch64, running DSM 7.1.4), and I still need some help.
For confirmation I checked,
uname -m
aarch64
I'm seeing that it looks to be possible to install Docker on this non-Intel machine, but so far the instructions I've read are not as specific (step by step) as I apparently need them to be, because something's not quite working.
End use is installing and running Home Assistant on this machine (which requires Docker) as an alternative to Raspberry Pi 4 because they are so hard to find and the DS120j seems to be an economic alternative (I have Homebridge successfully running on it and it's working great).
Though it looks like I was able to (sort of) install Docker, I cannot access the Docker GUI / it does not show up in my Package Center. I'm not sure how I can install Home Assistant without the Docker GUI.
So I'm not sure what I've done right and wrong along the way. I have tried multiple methods (from four posts on Stack Overflow) to get to this point, but I might need to start from scratch, which I'm completely prepared to do!
I have tried steps 1 - 6 from this Stack Overflow post, including the automatic script (and also a second automatic script from a link that was posted further down in the post replies):
Can I install Docker on arm8 based Synology Nas
Any detailed step-by-step instructions or insights would be greatly appreciated.
I have held off posting about this for two weeks, but am doing so now because there seem to be a lot of people buying these cheap DS120j NAS machines to run Homebridge / Home Assistant and other servers in place of Raspberry Pis.
Thank you!
M
This is one script that I have tried:
#!/bin/sh
#/bin/wget -O - https://raw.githubusercontent.com/wdmomoxx/catdriver/master/install-docker.sh | /bin/sh
/bin/wget https://raw.githubusercontent.com/wdmomoxx/catdriver/master/catdsm-docker.tgz
tar -xvpzf catdsm-docker.tgz -C /
rm catdsm-docker.tgz
PATH=/opt/sbin:/opt/bin:/opt/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
/opt/etc/init.d/S60dockerd
sudo docker run -d --network=host -v "/run/docker.sock:/var/run/docker.sock" portainer/portainer:linux-arm64
echo "猫盘群晖Docker安装完成"
echo "浏览器输入群晖IP:9000进入Docker UI"
I also ran:
sudo mkdir -p /volume1/#Docker/lib
sudo mkdir /docker
sudo mount -o bind "/volume1/#Docker/lib" /docker
I do not understand this tip from the comments in the above Stack Overflow post, though:
"Then set the data-root in /etc/docker/daemon.json: { "data-root": "/docker" }"
I have also created a docker user group and added my name to the group but I'm not sure if docker is set to associate the two.
However, when I currently SSH into the DiskStation and run
docker --version"
I do get,
Docker version 20.10.0, build 7287ab3
but I cannot seem to see or launch the Docker GUI.
I see there is talk on the net about using Portainer, but I'm not sure how to get that running either.
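For what it's worth, once the daemon is actually working, I gather Home Assistant can be started from the command line without any GUI, along these lines (the timezone and config path are placeholders I would need to adjust for my setup):
sudo docker run -d \
  --name homeassistant \
  --restart=unless-stopped \
  --privileged \
  --network=host \
  -e TZ=America/New_York \
  -v /volume1/docker/homeassistant:/config \
  ghcr.io/home-assistant/home-assistant:stable
If anyone can confirm whether something like that works on this box, that would help too.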

Related

How to change owner for docker socket in container (macOS, Intel Chip)

I have a fresh install of Docker Desktop on my machine and I'm attempting to create a dev environment using docker.io/library/wordpress:latest.
However, I'm having some issues with user permissions. From what I can see, the documentation doesn't mention this issue for Mac users, but does mention something for Ubuntu users (see the docs). The specific issue is as follows:
// Docker error msg...
chown: invalid group: 'root:docker'
WARNING: Could not change owner for docker socket in container : exit code 1
Docker socket permission set to allow in container docker
// My setup...
macOS Big Sur 11.6.5 (Intel chip)
Docker Desktop 4.8.2
VSCode Version: 1.67.1
git version 2.36.1
My Question: How do I resolve this issue? I.e. What steps do I need to take?
Any guidance would be greatly appreciated... 😅
Note: I can see other questions floating around here on stack, but from what I can see they're mostly covering ubuntu users or quite old questions and answers.
Note: Added screenshots to demonstrate what I was doing when the error occurred.
[Screenshots: Step 1, Step 2, Step 3 -- Error]
Most Docker images in public registries are not designed to be compatible with Docker Dev Environments (Beta).
According to the documentation (https://docs.docker.com/desktop/dev-environments/specify/), the specified Docker image is required to have a docker group and a vscode user that is already a member of that group.
So we need to modify the official image in your case so that it works with Docker Dev Environments and Visual Studio Code.
# Dockerfile
FROM docker.io/library/wordpress:latest
# the next two commands are required based on documentation
# https://docs.docker.com/desktop/dev-environments/specify/
RUN useradd -s /bin/bash -m vscode \
&& groupadd docker \
&& usermod -aG docker vscode
USER vscode
Build the new Docker image by running docker build, e.g.:
docker build -t bickyeric/wordpress:dev-latest -f Dockerfile .
After that, you can update the image tag from Step 2 of the question to the new image bickyeric/wordpress:dev-latest.
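As a quick sanity check (using the image tag from the build above), the container should now default to the vscode user, which belongs to the docker group:
docker run --rm bickyeric/wordpress:dev-latest id
# the output should list the vscode user with docker among its groups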

How to execute command from one docker container to another

I'm creating an application that will allow users to upload video files that will then be put through some processing.
I have two containers.
Nginx container that serves the website where users can upload their video files.
Video processing container that has FFmpeg and some other processing stuff installed.
What I want to achieve: I need container 1 to be able to run a bash script on container 2.
One possibility as far as I can see is to make them communicate over HTTP via an API. But then I would need to install a web server in container 2 and write an API which seems a bit overkill.
I just want to execute a bash script.
Any suggestions?
You have a few options, but the first 2 that come to mind are:
In container 1, install the Docker CLI and bind-mount /var/run/docker.sock (you need to specify the bind mount from the host when you start the container). Then, inside the container, you should be able to use docker commands against the bind-mounted socket as if you were executing them from the host (you might also need to chmod the socket inside the container to allow a non-root user to do this). There is a sketch of this after the warning below.
You could install SSHD on container 2, and then ssh in from container 1 and run your script. The advantage here is that you don't need to make any changes inside the containers to account for the fact that they are running in Docker and not on bare metal. The downside is that you will need to add the SSHD setup to your Dockerfile or the startup scripts.
Most of the other ideas I can think of are just variants of option (2), with SSHD replaced by some other tool.
Also be aware that Docker networking is a little strange (at least on Mac hosts), so you need to make sure that the containers are using the same docker-network and are able to communicate over it.
Warning:
To be completely clear, do not use option 1 outside of a lab or very controlled dev environment. It is taking a secure socket that has full authority over the Docker runtime on the host, and granting unchecked access to it from a container. Doing that makes it trivially easy to break out of the Docker sandbox and compromise the host system. About the only place I would consider it acceptable is as part of a full stack integration test setup that will only be run ad hoc by a developer. It's a hack that can be a useful shortcut in some very specific situations, but the drawbacks cannot be overstated.
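With that caveat in mind, a minimal sketch of option (1) for a controlled dev setup might look like this (image, container, and script names here are made up for illustration):
# container 2: the processing container, started with a name we can target later
docker run -d --name video-worker my-ffmpeg-image sleep infinity
# container 1: nginx, with the Docker CLI installed and the host's socket bind-mounted
docker run -d --name web -v /var/run/docker.sock:/var/run/docker.sock my-nginx-image
# from inside container 1, the host daemon can be asked to run a script in container 2
docker exec web docker exec video-worker /scripts/process-video.sh /uploads/input.mp4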
I wrote a Python package especially for this use case.
Flask-Shell2HTTP is a Flask extension that converts a command-line tool into a RESTful API with just 5 lines of code.
Example Code:
from flask import Flask
from flask_executor import Executor
from flask_shell2http import Shell2HTTP

app = Flask(__name__)
executor = Executor(app)
shell2http = Shell2HTTP(app=app, executor=executor, base_url_prefix="/commands/")

# each registered command becomes a POST endpoint under /commands/
shell2http.register_command(endpoint="saythis", command_name="echo")
shell2http.register_command(endpoint="run", command_name="./myscript")

if __name__ == "__main__":
    app.run(port=4000)  # port matching the curl example below
It can then be called easily, like:
$ curl -X POST -H 'Content-Type: application/json' -d '{"args": ["Hello", "World!"]}' http://localhost:4000/commands/saythis
You can use this to create RESTful micro-services that execute pre-defined shell commands/scripts with dynamic arguments asynchronously and fetch the result.
It supports file upload, callback functions, reactive programming, and more. I recommend checking out the Examples.
Running a docker command from a container is not straightforward and not really a good idea (in my opinion), because:
You'll need to install Docker in the container (and do Docker-in-Docker stuff)
You'll need to share the Unix socket, which is not a good thing if you have no idea what you're doing.
So, this leaves us two solutions:
Install SSH in your container and execute the command through SSH
Share a volume and have a process that watches for something to trigger your batch (see the sketch below)
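A rough sketch of that second (shared volume) approach, with both containers mounting the same volume at /shared/jobs (all paths and script names here are made up):
#!/bin/sh
# runs inside container 2: poll the shared volume for job files dropped by container 1
WATCH_DIR=/shared/jobs
while true; do
  for job in "$WATCH_DIR"/*.job; do
    [ -e "$job" ] || continue          # skip when the glob matches nothing
    /usr/local/bin/process-video.sh "$job"
    rm -f "$job"
  done
  sleep 5
done
Container 1 then only has to write a .job file into the shared volume to trigger the processing.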
It was mentioned here before, but a reasonable, semi-hacky option is to install SSH in both containers and then use ssh to execute commands on the other container:
# install SSH, if you don't have it already
sudo apt install openssh-server
# start the ssh service
sudo service ssh start
# start the daemon
sudo /usr/sbin/sshd -D &
Assuming you don't want to always be root, you can add a default user (in this case, 'foobob'):
useradd -m --no-log-init --system --uid 1000 foobob -s /bin/bash -g sudo -G root
#change password
echo 'foobob:foobob' | chpasswd
Do this on both the source and target containers. Now you can execute a command from container_1 to container_2.
# obtain container-id of target container using 'docker ps'
ssh foobob@<container-id> << "EOL"
echo 'hello bob from container 1' > message.txt
EOL
You can automate the password with ssh-agent, or you can use something a bit more hacky like sshpass (install it first using sudo apt install sshpass):
sshpass -p 'foobob' ssh foobob@<container-id>
I believe
docker exec -it <container_name> <command>
should work, even inside the container.
You could also try mounting docker.sock into the container you want to execute the command from:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...

Docker cannot access host files using -v option

Not 100% sure this is the right place but let's try.
I'm using on my Windows laptop the Docker Quickstart Terminal (docker toolbox) to get access to a Linux env with Google AppEngine, python, mysql...
Well, that seems to work and when I type docker run -i -t appengine /bin/bash I get access to this env.
Now I'd like to have access to some of my local (host) files so I can edit them with my Windows editors but run them into the docker instance.
I've seen a -v option but cannot make it work.
What I do
docker run -v /d/workspace:/home/root/workspace:rw -i -t appengine /bin/bash
But workspace stays empty in the Docker instance...
Any help appreciated
(I've read this before to post: https://github.com/rocker-org/rocker/wiki/Sharing-files-with-host-machine#windows)
You have to enable Shared Drives; you can follow this blog.
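Also note that with Docker Toolbox (VirtualBox), only C:\Users is shared into the boot2docker VM by default, so as a quick check you could try a path under /c/Users (the username here is a placeholder):
docker run -v /c/Users/yourname/workspace:/home/root/workspace -i -t appengine /bin/bash
If that works but /d/workspace does not, the D: drive simply isn't shared with the VM yet.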

Run docker without "sudo" in Fedora 24

Although this post is tempting to close for many, I should ask what I am doing wrong, since it is driving me crazy and I can't find a solution.
I have installed Docker on Fedora 24 and everything seems to be fine, but I can't run the docker command without sudo, and that's annoying (at least for me).
I am logged as a normal user (non-root) and as soon as I run a command I can see this message:
$ docker ps
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
However if I run only docker I can see a list of possible commands :-\
I've followed this guide and I read also a lot (here is a small list):
http://bsaunder.github.io/2014/12/21/running-docker-without-sudo/
Running Docker as non-root user
How to run docker image as a non-root user?
But I am certainly missing something; can anyone enlighten me? What am I missing here? I know the problem is that my user does not have permission on /var/run/docker.sock, but what's the fix?
Running docker to get the list of commands doesn't use a connection to the daemon, which is why you can run it as non-root.
Have you added your user to the docker group?
sudo usermod -aG docker <my-user>
If you do that, next time you log in you should be able to use the docker CLI without sudo. But beware that the docker group has root privileges, so this is a convenience but not a security improvement.
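If you want to test it without logging out first, you can start a shell with the new group applied (assuming the usermod above has already been run):
newgrp docker
docker ps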

Docker: Using MacOS, how can I pull CENTOS behind a proxy?

To begin with, I am behind a corporate proxy. I'm using docker 1.12.0
Using OSX, my .bash_profile looks like this:
export http_proxy='http://server-ip:port/'
export https_proxy='http://server-ip:port/'
export no_proxy='localhost,0.0.0.0,127.0.0.1'
What puzzles me is that I am able to pull the ubuntu image without any problems.
docker pull ubuntu:latest
When I attempt to pull centos I get the following error:
docker pull centos:latest
latest: Pulling from library/centos
8d30e94188e7: Pulling fs layer
dial tcp i/o timeout
I've read through this post about CentOS connection issues. I believe I have followed the suggested answers, but still no luck.
I am able to pull the image without any problems on my personal machine, so I know it must be something with the proxy. Any suggestions are greatly appreciated!
This is painfully obvious now; instead of turning to the internet first, I should have simply checked the preference options.
In Docker for Mac v1.12.0, once installed, click the Docker icon in the menu bar (upper right corner, next to the clock) and choose "Preferences".
Under the "Advanced" tab, you can enter proxy information.
Thank you BMitch for your time, I appreciate it!
Please pull and save the image on your laptop. Transfer the image to the server with no internet connection and use docker load.
Hope this works.
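If you go that route, the transfer would look roughly like this (the tar file name is arbitrary):
# on the machine with internet access
docker pull centos:latest
docker save -o centos-latest.tar centos:latest
# copy centos-latest.tar to the offline machine, then:
docker load -i centos-latest.tar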
Setting the environment variables in your .bashrc will update the network config for any commands you run as the user. However, Docker is designed as a client/server app, and the image pulls are run from the server (dockerd). Docker has docs on how to configure systemd with a proxy that should solve your issue. In brief, you need to adjust the following:
sudo -s
mkdir /etc/systemd/system/docker.service.d
cat >/etc/systemd/system/docker.service.d/http-proxy.conf <<EOF
[Service]
Environment="HTTP_PROXY=http://server-ip:port/"
EOF
systemctl daemon-reload
systemctl restart docker
exit
If you don't have systemd installed, you should be able to edit /etc/default/docker. The entry you need to add there is export http_proxy="http://server-ip:port/".
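For such a non-systemd install, that edit would look something like this (using the same placeholder proxy address as above):
echo 'export http_proxy="http://server-ip:port/"' | sudo tee -a /etc/default/docker
sudo service docker restart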
Lastly, I'm now seeing that you're on MacOS (the question about CentOS is a red herring since I'm sure you can't pull any image and you're not actually running CentOS). In boot2docker, you have the following procedure:
boot2docker ssh
sudo vi /var/lib/boot2docker/profile
# include your "export HTTP_PROXY=http://server-ip:port/" here
sudo /etc/init.d/docker restart
exit
