In Jupyter Docker, cannot connect to kernel - docker

When installing a Jupyter Docker image, for example this one:
docker run -d \
--hostname jupyterhub-ds \
--log-opt max-size=50m \
-p 8000:8000 \
-p 5006:5006 \
-e DOCKER_USER=$(id -un) \
-e DOCKER_USER_ID=$(id -u) \
-e DOCKER_PASSWORD=$(id -un) \
-e DOCKER_GROUP_ID=$(id -g) \
-e DOCKER_ADMIN_USER=$(id -un) \
-v "$(pwd)":/workdir \
-v "$(dirname $HOME)":/home_host \
dclong/jupyterhub-ds /scripts/sys/init.sh
JupyterLab starts fine and I can enter the lab through URL + port.
However, it is not possible to connect to the internal Python kernel (the connection hangs).
What kind of security issue am I facing?
Is this related to socket communication security?
After investigating, I have these messages:
[D 16:01:39.488 NotebookApp] Starting kernel: ['/usr/local/bin/python', '-m', 'ipykernel_launcher', '-f', '/root/.local/share/jupyter/runtime/kernel-f0420fbf-12e918f-20df7d3e804a.json']
[D 16:01:39.491 NotebookApp] Connecting to: tcp://127.0.0.1:51775
[D 16:01:39.491 NotebookApp] Connecting to: tcp://127.0.0.1:38609
[I 16:01:39.492 NotebookApp] Kernel started: f0420fbf-12ef-403e-918f-20df7d3e804a
[D 16:01:39.492 NotebookApp] Kernel args: {'kernel_name': 'python3', 'cwd': '/'}
[D 16:01:39.493 NotebookApp] Clearing buffer for 5e93046f-aa3e-4edd-a018-66b9d4c752e5
[I 16:01:39.493 NotebookApp] Kernel shutdown: 5e93046f-aa3e-4edd-a018-66b9d4c752e5
It seems linked to this one:
https://jupyter-notebook.readthedocs.io/en/stable/public_server.html
Firewall Setup
To function correctly, the firewall on the computer running the jupyter notebook server must be configured to allow connections from client machines
on the access port c.NotebookApp.port set in jupyter_notebook_config.py to allow connections to the web interface.
The firewall must also allow connections from 127.0.0.1 (localhost) on ports from 49152 to 65535. These ports are used by the server to communicate with the notebook kernels.
The kernel communication ports are chosen randomly by ZeroMQ,
and may require multiple connections per kernel, so a large range of ports must be accessible.
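For reference, a quick way to check whether those kernel ports are actually bound inside the container (a hedged check; <container-id> is a placeholder, and ss may need to be swapped for netstat depending on the image):
# the kernel's ZeroMQ ports should show up bound to 127.0.0.1 in the 49152-65535 range
docker exec -it <container-id> ss -ltn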

I'm not sure how you built the docker command, or why you chose that particular Docker image (dclong/jupyterhub).
If it is designed to run JupyterHub (multi-user), it doesn't sound like what you need if you're just trying to run your own single-user Jupyter server in Docker.
I would suggest using something like jupyter/scipy-notebook instead, which is designed to run just one Jupyter server (see the sketch below).
Otherwise, please describe what you actually want to get running, or why you believe you need to use that image, etc.
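A minimal single-user run might look roughly like this (a sketch only; the port mapping and mount path are my assumptions, not taken from your setup):
# jupyter/scipy-notebook listens on 8888 and serves /home/jovyan/work by default;
# the login token is printed in the container logs (docker logs <container-id>)
docker run -d \
-p 8888:8888 \
-v "$(pwd)":/home/jovyan/work \
jupyter/scipy-notebook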

Related

Hyperledger sawtooth with docker (Test network tutorial). Connectivity problem between the nodes of the network

I am trying to set up a Sawtooth network as in the following tutorial.
I use the following docker-compose.yaml file, as instructed in the tutorial, to create a Sawtooth network of 5 nodes using the PBFT consensus engine.
The problem is that once I try to check whether peering has occurred on the network by submitting a peers query to the REST API on the first node from the shell container, I get a connection refused answer:
curl: (7) Failed to connect to sawtooth-rest-api-default-0 port 8008: Connection refused
Connectivity among the containers seems to be working fine (I have checked with ping from inside the containers).
I suspect that the problem stems from the following line of the docker-compose.yaml file:
sawtooth-validator -vv \
--endpoint tcp://validator-0:8800 \
--bind component:tcp://eth0:4004 \
--bind consensus:tcp://eth0:5050 \
--bind network:tcp://eth0:8800 \
--scheduler parallel \
--peering static \
--maximum-peer-connectivity 10000
and more specifically the --bind option. I noticed that eth0 is not resolved properly to the IP of the container network, but instead to the loopback:
terminal output for validator 0
Do you believe that this could be the problem, or is there something else I might have overlooked?
Thank you
Looks like the moment I post something here, the answer magically reveals itself.
The backslash characters were not interpreted correctly, so the --bind options were not taken into account and the default is the loopback.
What I did to fix it was either put the whole command on a single line or use double backslashes (see the single-line sketch below).
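For illustration, the single-line form would look roughly like this (a sketch, assuming the command sits under the validator service's command: key in docker-compose.yaml):
# same options as above, collapsed onto one line so no line continuations are needed
sawtooth-validator -vv --endpoint tcp://validator-0:8800 --bind component:tcp://eth0:4004 --bind consensus:tcp://eth0:5050 --bind network:tcp://eth0:8800 --scheduler parallel --peering static --maximum-peer-connectivity 10000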

Inconsistent trust status of Jupyter Notebooks in Docker

I have a Docker image containing two .ipynb notebooks to run when starting a container from this image.
Here are the steps from within the Dockerfile to copy and trust the notebooks:
USER root
RUN mkdir -p $NOTEBOOK_DIR
COPY /PATH/TO/NOTEBOOK/NB1.ipynb $NOTEBOOK_DIR
COPY /PATH/TO/NOTEBOOK/NB2.ipynb $NOTEBOOK_DIR
RUN chown -R $NB_USER:$NB_GID $NOTEBOOK_DIR
USER $NB_UID
WORKDIR $HOME
RUN jupyter trust $NOTEBOOK_DIR/NB1.ipynb
RUN jupyter trust $NOTEBOOK_DIR/NB2.ipynb
The ENTRYPOINT ["start-notebooks.sh"] runs the following script:
#!/bin/bash
set -e
NB_PASS=$(echo ${SOME_ID} | python3.8 -c 'from notebook.auth import passwd;print(passwd(input()))')
# Run notebooks
jupyter trust $NOTEBOOK_DIR/NB1.ipynb
jupyter trust $NOTEBOOK_DIR/NB2.ipynb
jupyter-notebook --no-browser --ip 0.0.0.0 --port 8888 --NotebookApp.allow_origin='*' \
--NotebookApp.allow_remote_access=True --NotebookApp.quit_button=False --NotebookApp.terminals_enabled=False \
--NotebookApp.trust_xheaders=True --NotebookApp.open_browser=False --NotebookApp.notebook_dir=$NOTEBOOK_DIR \
--NotebookApp.password=${NB_PASS}
When I run and start the container I get the following output:
my_user@my_host:~$ docker run -it --rm -p 8888:8888 --expose 8888 -v /efs/PATH/TO/NOTEBOOK_FILES:/efs/PATH/TO/NOTEBOOK_FILES -e BASE_PATH=/efs/PATH/TO/BASE_PATH -e SOME_ID=fd283b38-3e4a-11eb-a205-7085c2c5e519 notebooks-image:latest
Notebook already signed: /home/nb_user/notebooks/NB1.ipynb
/home/nb_user/.local/lib/python3.8/site-packages/nbformat/__init__.py:92: MissingIDFieldWarning: Code cell is missing an id field, this will become a hard error in future nbformat versions. You may want to use `normalize()` on your notebooks before validations (available since nbformat 5.1.4). Previous versions of nbformat are fixing this issue transparently, and will stop doing so in the future.
validate(nb)
Signing notebook: /home/nb_user/notebooks/NB2.ipynb
[I 11:40:10.051 NotebookApp] Writing notebook server cookie secret to /home/nb_user/.local/share/jupyter/runtime/notebook_cookie_secret
[I 11:40:10.303 NotebookApp] [jupyter_nbextensions_configurator] enabled 0.6.1
[I 2022-12-11 11:40:10.507 LabApp] JupyterLab extension loaded from /home/nb_user/.local/lib/python3.8/site-packages/jupyterlab
[I 2022-12-11 11:40:10.507 LabApp] JupyterLab application directory is /home/nb_user/.local/share/jupyter/lab
[I 11:40:10.513 NotebookApp] Serving notebooks from local directory: /home/nb_user/notebooks
[I 11:40:10.513 NotebookApp] Jupyter Notebook 6.5.2 is running at:
[I 11:40:10.513 NotebookApp] http://my_host:8888/
[I 11:40:10.513 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[I 11:59:33.290 NotebookApp] 302 GET / (192.168.x.x) 1.300000ms
[W 11:59:33.303 NotebookApp] Clearing invalid/expired login cookie username-my_host-8888
[I 11:59:33.304 NotebookApp] 302 GET /tree? (192.168.x.x) 2.570000ms
[W 11:59:43.120 NotebookApp] Not allowing login redirect to '/tree?'
[I 11:59:43.120 NotebookApp] 302 POST /login?next=%2Ftree%3F (192.168.x.x) 63.300000ms
[I 11:59:43.191 NotebookApp] 302 GET / (192.168.x.x) 1.130000ms
/home/nb_user/.local/lib/python3.8/site-packages/nbformat/__init__.py:92: MissingIDFieldWarning: Code cell is missing an id field, this will become a hard error in future nbformat versions. You may want to use `normalize()` on your notebooks before validations (available since nbformat 5.1.4). Previous versions of nbformat are fixing this issue transparently, and will stop doing so in the future.
validate(nb)
[W 11:59:47.222 NotebookApp] Notebook NB2.ipynb is not trusted
When I open NB1 in the Jupyter Notebook GUI it is already trusted and I can start working immediately.
But when I open NB2 within the Jupyter Notebook GUI, the "Notebook not trusted" dialog automatically pops up.
I'm aware of this answer to the question Jupyter notebook not trusted. It states:
This can also happen when you create a notebook in a docker container with a mounted volume (the file is owned by the root user) and then open it in Jupyter running on the host machine. Changing the file owner to the host user helps.
My Questions are:
Assuming NB1.ipynb and NB2.ipynb both have the same ownership and permissions, are the rest of my steps OK?
If so, why is NB1 trusted and NB2 is not?
I was able to fix this issue after investigating this error from my container output above:
MissingIDFieldWarning: Code cell is missing an id field
I found this question, which is the opposite of my error message above.
I compared the nbformat_minor value in both the NB1 and NB2 notebook metadata.
In NB1 it was "nbformat_minor": 4, whereas in NB2 it was "nbformat_minor": 5.
As suggested in the answer, I changed the nbformat_minor in my notebook from 5 to 4.
In short, I opened the notebook in a text editor and changed the end of the notebook to:
{
"nbformat": 4,
"nbformat_minor": 4
}
This fixed my issue; both notebooks are now trusted when running the container.
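To double-check the format versions of both files before rebuilding the image, something like this works (a hedged check, assuming jq is installed):
# print the top-level format fields of each notebook
jq '{nbformat, nbformat_minor}' NB1.ipynb NB2.ipynb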

Error accessing Scylladb cluster outside docker container

I'm running ScyllaDB locally in a Docker container and I want to access the cluster from outside the container. That's when I get the following error: cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers')
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 172.17.0.2 776 KB 256 ? ad698c75-a465-4deb-a92c-0b667e82a84f rack1
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
Cluster Information:
Name: Test Cluster
Snitch: org.apache.cassandra.locator.SimpleSnitch
DynamicEndPointSnitch: disabled
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
443048b2-c1fe-395e-accd-5ae9b6828464: [172.17.0.2]
I have no problem accessing the cluster using cqlsh on port 9042:
Connected to at 172.17.0.2:9042.
[cqlsh 5.0.1 | Cassandra 3.0.8 | CQL spec 3.3.1 | Native protocol v4]
Now I'm trying to access the cluster from my fastapi app that is outside the docker container.
from cassandra.cluster import Cluster
cluster = Cluster(['172.17.0.2'])
session = cluster.connect('Test Cluster')
And here's the Error that I'm getting:
raise NoHostAvailable("Unable to connect to any servers", errors)
cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers', {'172.17.0.2:9042': OSError(51, "Tried connecting to [('172.17.0.2', 9042)]. Last error: Network is unreachable")})
With a little bit of tinkering, it's possible to connect to the Scylla running in a container from outside the container for local development.
I've tried this on an M1 Mac with Docker Desktop:
Run the Scylla container with a couple of new parameters [src]:
--listen-address 0.0.0.0 for simplification, as we are spawning Scylla inside the container, to allow connections to the container from any network
--broadcast-rpc-address 127.0.0.1, required if --listen-address is set to 0.0.0.0. We are going to port forward 9042 from the container to the host (local) machine, so this is the IP where it will be accessible.
The final command to spawn the container is:
$ docker run --rm -ti \
-p 127.0.0.1:9042:9042 \
scylladb/scylla \
--smp 1 \
--listen-address 0.0.0.0 \
--broadcast-rpc-address 127.0.0.1
The -p 127.0.0.1:9042:9042 is there to make port 9042 accessible on the host (local) machine.
Install the driver with pip3 install scylla-driver, as it has support for the darwin/arm64 architecture.
Write a simple Python script:
# so74265199.py
from cassandra.cluster import Cluster
cluster = Cluster(['127.0.0.1'])
session = cluster.connect()
# Select from a table that is available without keyspace
res = session.execute('SELECT * FROM system.versions')
print(res.one())
Run your script
$ python3 so74265199.py
Row(key='local', build_id='71178cf6db7021896cd8251751b78b3d9e3afa8d', build_mode='release', version='5.0.5-0.20221009.5a97a1060')
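Optionally, the same connectivity can be confirmed from the host with cqlsh (a hedged check; this assumes cqlsh is installed locally):
# query the same system.versions table used in the script above
cqlsh 127.0.0.1 9042 -e 'SELECT * FROM system.versions;'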
Disclaimer: I'm not an expert in Scylla's configuration, so feel free to point out a better approach.

Docker Container Refuses to NOT use Proxy for Docker Network

I'm having issues trying to get networking to work correctly in my container inside a corp domain/behind a proxy.
I've correctly configured (I think) Docker to get around the proxy for downloading images, but now my container is having trouble talking to another container inside the same docker-compose network.
So far, the only resolution is to manually append the docker-compose network to the no_proxy variable in the docker config, but this seems wrong and would need to be configured for each docker-compose network and requires a restart of docker.
Here is how I configured the Docker proxy settings on the host:
cat << "EOF" >docker_proxy_setup.sh
#!/bin/bash
#Proxy
#ActiveProxyVar=127.0.0.1:80
#Domain
corpdom=domain.org
httpproxyvar=http://$ActiveProxyVar/
httpsproxyvar=http://$ActiveProxyVar/
mkdir ~/.docker
cat << EOL >~/.docker/config.json
{
"proxies":
{
"default":
{
"httpProxy": "$httpproxyvar",
"httpsProxy": "$httpsproxyvar",
"noProxy": ".$corpdom,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
}
}
}
EOL
mkdir -p /etc/systemd/system/docker.service.d
cat << EOL >/etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=$httpproxyvar"
Environment="HTTPS_PROXY=$httpsproxyvar"
Environment="NO_PROXY=.$corpdom,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
EOL
systemctl daemon-reload
systemctl restart docker
#systemctl show --property Environment docker
docker run hello-world
EOF
chmod +x docker_proxy_setup.sh
docker_proxy_setup.sh
and basically if I change to this:
#Domain
corpdom=domain.org,icinga_icinga-net
I am able to use curl to test the network and it works correctly, but ONLY when using container_name.icinga_icinga-net.
Eg:
This fails: curl -k -u root:c54854140704eafc https://icinga2-api:5665/v1/objects/hosts
While this succeeds: curl -k -u root:c54854140704eafc https://icinga2-api.icinga_icinga-net:5665/v1/objects/hosts
Note that using curl --noproxy seems to have no effect.
Here is some output from the container for reference. Any ideas what I can do to have containers NOT use the proxy for Docker networks (private IPv4)?
root@icinga2-web:/# ping icinga2-api
PING icinga2-api (172.30.0.5) 56(84) bytes of data.
64 bytes from icinga2-api.icinga_icinga-net (172.30.0.5): icmp_seq=1 ttl=64 time=0.138 ms
64 bytes from icinga2-api.icinga_icinga-net (172.30.0.5): icmp_seq=2 ttl=64 time=0.077 ms
^C
--- icinga2-api ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1025ms
rtt min/avg/max/mdev = 0.077/0.107/0.138/0.030 ms
root@icinga2-web:/# curl --noproxy -k -u root:c54854140704eafc https://172.30.0.5:5665/v1/objects/hosts
curl: (56) Received HTTP code 503 from proxy after CONNECT
root@icinga2-web:/# curl -k -u root:c54854140704eafc https://172.30.0.5:5665/v1/objects/hosts
curl: (56) Received HTTP code 503 from proxy after CONNECT
root@icinga2-web:/# curl -k -u root:c54854140704eafc https://icinga2-api:5665/v1/objects/hosts
curl: (56) Received HTTP code 503 from proxy after CONNECT
root@icinga2-web:/# curl -k -u root:c54854140704eafc https://icinga2-api.icinga_icinga-net:5665/v1/objects/hosts
{"results":[{"attrs":{"__name":"icinga2-api","acknowledgement":0,"acknowledgement_expiry":0,"acknowledgement_last_change":0,"action_url":"","active":true,"address":"127.0.0.1","address6":"::1","check_attempt":1,"check_command":"hostalive","check_interval":60,"check_period":"","check_timeout":null,"command_endpoint":"","display_name":"icinga2-api","downtime_depth":0,"enable_active_checks":true,"enable_event_handler":true,"enable_flapping":false,"enable_notifications":true,"enable_passive_checks":true,"enable_perfdata":true,"event_command":"","executions":null,"flapping":false,"flapping_current":0,"flapping_ignore_states":null,"flapping_last_change":0,"flapping_threshold":0,"flapping_threshold_high":30,"flapping_threshold_low":25,"force_next_check":false,"force_next_notification":false,"groups":["linux-servers"],"ha_mode":0,"handled":false,"icon_image":"","icon_image_alt":"","last_check":1663091644.161905,"last_check_result":{"active":true,"check_source":"icinga2-api","command":["/usr/lib/nagios/plugins/check_ping","-H","127.0.0.1","-c","5000,100%","-w","3000,80%"],"execution_end":1663091644.161787,"execution_start":1663091640.088944,"exit_status":0,"output":"PING OK - Packet loss = 0%, RTA = 0.05 ms","performance_data":["rta=0.055000ms;3000.000000;5000.000000;0.000000","pl=0%;80;100;0"],"previous_hard_state":99,"schedule_end":1663091644.161905,"schedule_start":1663091640.087908,"scheduling_source":"icinga2-api","state":0,"ttl":0,"type":"CheckResult","vars_after":{"attempt":1,"reachable":true,"state":0,"state_type":1},"vars_before":{"attempt":1,"reachable":true,"state":0,"state_type":1}},"last_hard_state":0,"last_hard_state_change":1663028345.921676,"last_reachable":true,"last_state":0,"last_state_change":1663028345.921676,"last_state_down":0,"last_state_type":1,"last_state_unreachable":0,"last_state_up":1663091644.161787,"max_check_attempts":3,"name":"icinga2-api","next_check":1663091703.191943,"next_update":1663091771.339701,"notes":"","notes_url":"","original_attributes":null,"package":"_etc","paused":false,"previous_state_change":1663028345.921676,"problem":false,"retry_interval":30,"severity":0,"source_location":{"first_column":1,"first_line":18,"last_column":20,"last_line":18,"path":"/etc/icinga2/conf.d/hosts.conf"},"state":0,"state_type":1,"templates":["icinga2-api","generic-host"],"type":"Host","vars":{"disks":{"disk":{},"disk /":{"disk_partitions":"/"}},"http_vhosts":{"http":{"http_uri":"/"}},"notification":{"mail":{"groups":["icingaadmins"]}},"os":"Linux"},"version":0,"volatile":false,"zone":""},"joins":{},"meta":{},"name":"icinga2-api","type":"Host"}]}
root@icinga2-web:/#
PS: I'm fairly certain this is not an issue specific to Icinga, as I've had some random proxy issues with other containers. But I can say I've tested this Icinga compose setup outside the corp domain and it worked fine 100%.
Partial resolution!
I would still prefer to use CIDR notation so that no_proxy works via the container name without having to adjust docker-compose/.env, but I got it to work.
A few things I did:
Added lowercase variants to the docker service -->:
cat << EOL >/etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=$httpproxyvar"
Environment="HTTPS_PROXY=$httpsproxyvar"
Environment="NO_PROXY=.$corpdom,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
Environment="http_proxy=$httpproxyvar"
Environment="https_proy=$httpsproxyvar"
Environment="no_proxy=.$corpdom,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
EOL
Added no_proxy in caps and lowercase to the docker-compose containers and set it in .env.
Note: both lowercase and CAPS should be used.
environment:
- 'NO_PROXY=${NO_PROXY}'
- 'no_proxy=${NO_PROXY}'
NO_PROXY=.domain.org,127.0.0.0/8,172.16.0.0/12,icinga_icinga-net
I would prefer to at least append to the existing variable, but I tried the following and it made the variable no_proxy equal to ,icinga_icinga-net:
NO_PROXY=$NO_PROXY,icinga_icinga-net
NO_PROXY=${NO_PROXY},icinga_icinga-net
Note: NO_PROXY was set on host via export
I still don't understand why it fails when using:
curl --noproxy -k -u root:c54854140704eafc https://172.30.0.4:5665/v1/objects/hosts
when I have no_proxy set to 172.16.0.0/12, which should cover 172.16.0.0 – 172.31.255.255, but it doesn't work.
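One thing worth ruling out (my assumption, not something confirmed by the logs above): curl's --noproxy option expects a value, so in the command above -k ends up being read as the no-proxy list. Passing the target explicitly would look like this:
# bypass the proxy for this one address (or use '*' to bypass it for everything)
curl --noproxy 172.30.0.4 -k -u root:c54854140704eafc https://172.30.0.4:5665/v1/objects/hosts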
Update:
I tried setting no_proxy to the IP explicitly (no CIDR) and that worked, but it still failed w/ just container as host (no .icinga-net).
This is all related to this great post -->
https://about.gitlab.com/blog/2021/01/27/we-need-to-talk-no-proxy/
This is the best I can come up with, happy to reward better answers!
Docker Setup (Global):
#!/bin/bash
#Proxy
ActiveProxyVar=127.0.0.7
#Domain
corpdom=domain.org
#NoProxy
NOT_PROXY=127.0.0.0/8,172.16.0.0/12,192.168.0.0/16,10.0.0.0/8,.$corpdom
httpproxyvar=http://$ActiveProxyVar/
httpsproxyvar=http://$ActiveProxyVar/
mkdir ~/.docker
cat << EOL >~/.docker/config.json
{
"proxies":
{
"default":
{
"httpProxy": "$httpproxyvar",
"httpsProxy": "$httpsproxyvar",
"noProxy": "$NOT_PROXY"
}
}
}
EOL
mkdir -p /etc/systemd/system/docker.service.d
cat << EOL >/etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=$httpproxyvar"
Environment="HTTPS_PROXY=$httpsproxyvar"
Environment="NO_PROXY=$NOT_PROXY"
Environment="http_proxy=$httpproxyvar"
Environment="https_proy=$httpsproxyvar"
Environment="no_proxy=$NOT_PROXY"
EOL
systemctl daemon-reload
systemctl restart docker
#systemctl show --property Environment docker
#docker run hello-world
docker-compose.yaml:
environment:
- 'NO_PROXY=${NO_PROXY}'
- 'no_proxy=${NO_PROXY}'
.env:
-- Basically, add the docker-compose network, then each container name...
NO_PROXY=127.0.0.0/8,172.16.0.0/12,192.168.0.0/16,10.0.0.0/8,.icinga_icinga-net,icinga2-api,icinga2-web,icinga2-db,icinga2-webdb,icinga2-redis,icinga2-directordb,icinga2-icingadb,icinga2-web_director
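A quick way to verify what a given container actually ends up with (a hedged check; the service name is just one example taken from the list above):
# inspect the proxy-related environment inside a running compose service
docker compose exec icinga2-web env | grep -i proxy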

Docker remote API doesn't restart after my computer restarts

Last week I struggled to get my Docker remote API working. As it is running on a VM, I had not restarted the VM since then. Today I finally restarted my VM and it is not working any more (docker and docker-compose work normally, but not the Docker remote API). My Docker init file, /etc/init/docker.conf, looks like this:
description "Docker daemon"
start on filesystem and started lxc-net
stop on runlevel [!2345]
respawn
script
/usr/bin/docker -H tcp://0.0.0.0:4243 -d
end script
# description "Docker daemon"
# start on (filesystem and net-device-up IFACE!=lo)
# stop on runlevel [!2345]
# limit nofile 524288 1048576
# limit nproc 524288 1048576
respawn
kill timeout 20
.....
.....
Last time, I made the settings indicated here: this.
I tried nmap to see if port 4243 is open.
ubuntu#ubuntu:~$ nmap 0.0.0.0 -p-
Starting Nmap 7.01 ( https://nmap.org ) at 2016-10-12 23:49 CEST
Nmap scan report for 0.0.0.0
Host is up (0.000046s latency).
Not shown: 65531 closed ports
PORT STATE SERVICE
22/tcp open ssh
43978/tcp open unknown
44672/tcp open unknown
60366/tcp open unknown
Nmap done: 1 IP address (1 host up) scanned in 1.11 seconds
As you can see, port 4243 is not open.
When I run:
ubuntu@ubuntu:~$ echo -e "GET /images/json HTTP/1.0\r\n" | nc -U
This is nc from the netcat-openbsd package. An alternative nc is available
in the netcat-traditional package.
usage: nc [-46bCDdhjklnrStUuvZz] [-I length] [-i interval] [-O length]
[-P proxy_username] [-p source_port] [-q seconds] [-s source]
[-T toskeyword] [-V rtable] [-w timeout] [-X proxy_protocol]
[-x proxy_address[:port]] [destination] [port]
I also run this:
ubuntu@ubuntu:~$ sudo docker -H=tcp://0.0.0.0:4243 -d
flag provided but not defined: -d
See 'docker --help'.
I restarted my computer many times and tried a lot of things with no success.
I already have a group named docker and my user is in it:
ubuntu@ubuntu:~$ groups $USER
ubuntu : ubuntu adm cdrom sudo dip plugdev lpadmin sambashare docker
Please tell me what is wrong.
Your startup script contains an invalid command:
/usr/bin/docker -H tcp://0.0.0.0:4243 -d
Instead you need something like:
/usr/bin/docker daemon -H tcp://0.0.0.0:4243
As of 1.12, this is now (but docker daemon will still work):
/usr/bin/dockerd -H tcp://0.0.0.0:4243
Please note that this is opening a port that gives remote root access without any password to your docker host.
Anyone that wants to take over your machine can run docker run -v /:/target -H your.ip:4243 busybox /bin/sh to get a root shell with your filesystem mounted at /target. If you'd like to secure your host, follow this guide to set up TLS certificates.
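For reference, the TLS-protected variant of the same daemon flags looks roughly like this (a sketch; the certificate paths are placeholders, not values from this setup):
# require client certificates signed by your CA before accepting API calls
/usr/bin/dockerd -H tcp://0.0.0.0:2376 --tlsverify --tlscacert=/path/to/ca.pem --tlscert=/path/to/server-cert.pem --tlskey=/path/to/server-key.pem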
I finally found www.ivankrizsan.se and it is working fine now. Thanks to this guy (or girl) ;).
These settings work for me on Ubuntu 16.04. Here is how to do it:
Edit this file /lib/systemd/system/docker.service and replace the line ExecStart=/usr/bin/dockerd -H fd:// with
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:4243
Save the file
Restart with: sudo service docker restart
Test with: curl http://localhost:4243/version
Result: you should see something like this:
{"Version":"1.11.0","ApiVersion":"1.23","GitCommit":"4dc5990","GoVersion" "go1.5.4","Os":"linux","Arch":"amd64","KernelVersion":"4.4.0-22-generic","BuildTime":"2016-04-13T18:38:59.968579007+00:00"}
Attention:
Remain aware that 0.0.0.0 is not good for security; for more security, you should use 127.0.0.1.
