How to log in to the local VM and change a system parameter when using Docker for Mac [closed] - docker

I use Docker for Mac and didn't create any other VM after installing it.
I use spujadas/elk-docker to run ELK.
The log shows that vm.max_map_count is too low.
So my question is: how do I change it on the local VM?
docker-machine does not list any local machine,
so I can't ssh to the local VM and modify it.

The troubleshooting section mentions:
In particular, the message max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144] means that the host's limits on mmap counts must be set to at least 262144.
Use sysctl vm.max_map_count to view the current value, and see Elasticsearch's documentation on virtual memory for guidance on how to change this value.
sysctl -w vm.max_map_count=262144
Note that the limits must be changed on the host; they cannot be changed from within a container.
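(As an aside, that note refers to ordinary containers. A privileged container gets a writable /proc/sys on the VM's shared kernel, so the following is a commonly used workaround; a sketch only, and the value resets when the VM restarts:)
docker run --rm --privileged alpine sysctl -w vm.max_map_count=262144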
With Docker Toolbox you can ssh into the default machine with docker-machine ssh (the default machine must exist if you have ever run a container through Toolbox).
See Install Elasticsearch with Docker:
OSX with Docker Toolbox
The vm.max_map_count setting must be set via docker-machine:
docker-machine ssh
sudo sysctl -w vm.max_map_count=262144
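(The same fix as one-liners, a sketch assuming the machine is named default; /var/lib/boot2docker/bootlocal.sh is boot2docker's boot hook, which can be used to persist the change across VM restarts:)
docker-machine ssh default "sudo sysctl -w vm.max_map_count=262144"
docker-machine ssh default "echo 'sysctl -w vm.max_map_count=262144' | sudo tee -a /var/lib/boot2docker/bootlocal.sh"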
If you have Docker for Mac and its xhyve VM, see this thread:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
To exit: CTRL+A CTRL+\ followed by "y".
You will see a similar recommendation in "Install Elasticsearch with Docker":
The vm.max_map_count setting must be set within the xhyve virtual machine:
$ screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
Log in with root and no password.
Then configure the sysctl setting as you would for Linux:
sysctl -w vm.max_map_count=262144
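(To verify the change took effect, you can read the value back from any container, since reading requires no privileges; a sketch:)
docker run --rm alpine sysctl vm.max_map_count
# expected output: vm.max_map_count = 262144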

Related

docker problem on mac terminal: ERROR: Cannot connect to the Docker daemon at unix:///var/run/docker.sock [closed]

I am using Docker with k8s from the Mac terminal. I don't know what I did, but my docker commands no longer work; for example, docker info and docker run... hang forever. I tried to fix this problem by uninstalling and reinstalling Docker, but after I reinstalled it, I got this error message:
$ docker info
Client:
 Context: default
 Debug Mode: false

Server:
ERROR: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
errors pretty printing info
For other docker commands, I also received:
ERROR: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
errors pretty printing info
I've read through lots of websites (How to easily install and uninstall docker on MacOs; "VirtualBox is configured with multiple host-only adapters with the same IP" when starting docker), but I am still unable to fix my problem. Many of the solutions do not work on macOS. Several were solved with the systemctl command, so I tried replacing that with launchctl and following the rest of the instructions (Cannot connect to the Docker daemon at unix:/var/run/docker.sock. Is the docker daemon running?), but none of them worked for me.
Please help solve this problem. Thank you!
I just found out that I have to restart Docker Desktop as well. I had no idea that Docker Desktop and the terminal's docker command were related, but apparently they are. If anyone encounters a similar problem, remember to restart Docker Desktop and wait for it to finish starting!
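(For reference, a sketch of restarting Docker Desktop from the macOS terminal and waiting for the daemon to answer; the app name "Docker" is the usual one but is an assumption here:)
osascript -e 'quit app "Docker"'
open -a Docker
# block until the daemon responds before retrying other commands
until docker info >/dev/null 2>&1; do sleep 1; done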
Try giving/modifying the permissions on the Docker socket:
chown root:docker /var/run/docker.sock
chmod g+w /var/run/docker.sock

Can't ssh to GitLab ee in a docker container [closed]

I've installed GitLab EE on Docker. I'd like to authenticate via SSH instead of a password, but each time I try, the connection is closed. The SSH port mapping is 1122->22, so I'm connecting with git@gitlab.example -p 1122. I also allowed the port in ufw and checked that the OpenSSH server is running in the container.
Error: Connection closed by HOST port 1122
I searched for a long time but didn't find anything, so I'd be glad for any suggestions.
Potential problem with Docker and UFW
A while ago I wondered how to make UFW and Docker work together (the GitLab service doesn't seem to be the problem; you could likely have had the same issue with any service at all).
Check out this thread: What is the best practice of docker + ufw under Ubuntu
And also consider this:
To persist the iptables rule, install the Linux package iptables-persistent according to your server distro; in my case (Debian) it is sudo apt install iptables-persistent, and the package installation will add the NAT rule to a persistent file which is executed on boot. ~afboteros
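(A sketch following that advice for this question's port, assuming a Debian/Ubuntu host:)
sudo ufw allow 1122/tcp
sudo apt install iptables-persistent      # saves current rules at install time
sudo netfilter-persistent save            # re-save after any later rule changes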
Potential problem with GitLab and Docker
When running GitLab through Docker, some heavily port-bound services like SSH may need to be configured to match the exposed port. If you configure the SSH service to use port 1122, as you intended, and bind it that way in the Dockerfile or run command, you may get it to work; see the sketch after the documentation link below.
Official Gitlab documentation
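(A hypothetical sketch of running GitLab with the SSH port remapped and GitLab told about it; gitlab_rails['gitlab_shell_ssh_port'] is the omnibus setting for the advertised SSH port, and the image tag and other ports are assumptions:)
docker run -d --name gitlab \
  -p 1122:22 -p 80:80 -p 443:443 \
  -e GITLAB_OMNIBUS_CONFIG="gitlab_rails['gitlab_shell_ssh_port'] = 1122" \
  gitlab/gitlab-ee:latest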

Why can I not simply ssh to a docker container from my Windows host? [closed]

I am trying to ssh from my Windows host to a Docker Ubuntu container. I know I can use docker exec -it <container-name> /bin/bash to get a shell; however, I want to do a normal "ssh root@192.168.xx.xx" login because I want to simulate a remote-computer login, and it also works easily with my PyCharm.
However, after I installed openssh-server and started it, logging in with ssh from my host is still not possible:
:~$ ssh root@192.168.99.105
The authenticity of host '192.168.99.105 (192.168.99.105)' can't be established.
ECDSA key fingerprint is SHA256:********
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.99.105' (ECDSA) to the list of known hosts.
root@192.168.99.105's password: xxx
Permission denied, please try again.
How can I solve this problem? I just want to simply ssh to this container...
To answer the question asked in the title:
Why can I not simply ssh to a docker container from my Windows host?
Docker is a way to configure Linux kernel settings on a process to limit what the process can see (namespaces) and how many resources that process can use (cgroups). So this question becomes "why can't I ssh into a process", and the answer is typically because that process is not an sshd server. The Ubuntu image for docker is not a virtual machine with all the associated daemons; it doesn't even include the kernel. Instead, it's a minimal filesystem with utilities found in an Ubuntu environment (like apt-get and bash).
On the other hand, the docker exec command does work because it is running a second command in the same isolated environment as the rest of the container. So if bash is installed in the image, then docker exec -it $container_id bash will run an interactive shell with the same namespaces and cgroups as the rest of your container processes.
If you want to ssh into your container, my advice is that you don't. This is similar to a code smell, a sign you are treating containers like a VM, and will have issues with the immutability and ephemeral nature of containers. The goal of working with containers is to have all your changes pushed into version control, build a new image, and deploy that image, for every change to the production environment. This eliminates the risk of state drift where interactive changes were made over time by one person and not known to the person trying to rebuild the environment later.
If you still prefer to ignore the advice, or your application is explicitly an sshd server, then you need to install and configure sshd as your running application inside of the container. There's documentation from Docker on how to do this, and lots of examples on Docker Hub from various individuals if you search on sshd (note that I don't believe any of these are official so I wouldn't recommend any of them).
You likely need to configure sshd on the container to allow root access and/or enable password authentication.
sudo sed -i 's|[#]*PasswordAuthentication no|PasswordAuthentication yes|g' /etc/ssh/sshd_config
echo PermitRootLogin yes | sudo tee -a /etc/ssh/sshd_config
sudo service ssh restart
One or both of these commands may help if your container image is Ubuntu/Debian based. I personally have never had the need to ssh into a docker container.
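(If you do go this route, here is a minimal, hypothetical sketch of building and running an Ubuntu image whose main process is sshd with root password login; 'changeme' is a placeholder password and the port mapping is an example:)
cat > Dockerfile <<'EOF'
FROM ubuntu:20.04
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive apt-get install -y openssh-server \
 && mkdir -p /var/run/sshd \
 && echo 'root:changeme' | chpasswd \
 && sed -i 's/#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config \
 && sed -i 's/#\?PasswordAuthentication .*/PasswordAuthentication yes/' /etc/ssh/sshd_config
EOF
docker build -t ubuntu-sshd .
docker run -d -p 2222:22 ubuntu-sshd /usr/sbin/sshd -D
# with Docker Toolbox, use the machine's IP instead of localhost
ssh -p 2222 root@localhost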

Unable to reach host from dockerized CentOS through OpenVPN proxy [closed]

I am using Windows with CentOS in a Docker container. I am already connected to a VPN using OpenVPN on Windows, but when I try to reach a host from the dockerized CentOS, it says it is unable to connect.
Is it possible to reach that host from CentOS?
From what I've gathered from your question, you should follow both of the steps below. (You will be using the bridge network driver by default, so you may not need to be concerned with Step 1; I've mentioned it in case you were experimenting with non-default settings.)
Step 1: Use the bridge or host network driver for your Docker container so it can reach the network of the host machine running the containers. Go through https://docs.docker.com/network/ for more info.
Step 2: Configure the Docker proxy. Add your VPN settings to the files mentioned in https://docs.docker.com/network/proxy/ as per your use case.
Short Version:
Add the following to your Dockerfile, or their equivalent if you use docker run to execute your builds. For more info, refer to the link in Step 2.
ENV HTTP_PROXY "proxy"
ENV HTTPS_PROXY "proxy"
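(Equivalently at run time, a sketch with placeholder proxy addresses and target host:)
docker run --rm \
  -e HTTP_PROXY="http://proxy.example.com:3128" \
  -e HTTPS_PROXY="http://proxy.example.com:3128" \
  centos:7 curl -I https://example.com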
Maybe SELinux is preventing the connection to the VPN. You can check the log messages with the following command:
cat /var/log/messages | grep "SELinux is preventing"
If the log messages show that SELinux is involved in the problem, then to confirm that SELinux alone is responsible, go to /etc/selinux/config and change the line SELINUX=enforcing to:
SELINUX=permissive
and restart CentOS. This makes SELinux only generate log messages without enforcing its policies. If the problem goes away, you should create appropriate SELinux policies based on the log messages, which tell you which restrictions are preventing the VPN connection; your custom policies must provide the required permissions. Then you can revert the change in the above file and restart CentOS to get SELinux security back along with access to the VPN.
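(A sketch of generating such a custom policy module from the logged denials, assuming auditd is logging to /var/log/audit/audit.log and the policycoreutils audit2allow tools are installed; the module name openvpn-local is arbitrary:)
cat /var/log/audit/audit.log | audit2allow -M openvpn-local
semodule -i openvpn-local.pp   # install the generated policy module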
Also, you can check the permissions of the *.pem files associated with OpenVPN using the following command:
ls -l /home/some-user/.cert/nm-openvpn/
for user 'some-user' or
ls -l /root/.cert/nm-openvpn/
for user 'root'.

Adding CPUs accessible by docker for TensorFlow on Windows 10

I'm using TensorFlow on Windows 10 with Docker (yes, I know Windows 10 isn't supported yet). It performs OK, but it looks like I am only accessing one of my CPU cores (I have 8). TensorFlow has the ability to assign ops to different devices, so I'd like to be able to get access to all 8. When I view the settings in VirtualBox, it says only 1 CPU out of the 8 is configured for the machine. I tried editing the machine to set it to more, but that led to all sorts of weirdness.
Does anyone know the right way to either create or restart a docker machine so it has 8 CPUs? I'm using the Docker Quickstart app.
Cheers!!
First you need to ensure you have enabled Virtualization for your machine. You have to do that in the BIOS of your computer.
The link below has a nice video on how to do that, but there are others as well if you google it:
https://www.youtube.com/watch?v=mFJYpT7L5ag
Then you have to stop the docker machine (i.e. the VirtualBox vm) and change the CPU configuration in VirtualBox.
To list the name of your docker machine (it is usually default) run:
docker-machine ls
Then stop the docker machine:
docker-machine stop <machine name>
Next open VirtualBox UI and change the number of CPUs:
Select the docker virtual machine (should be marked as Powered off)
Click Settings->Systems->Processors
Change the number of CPUs
Click OK to save your changes
Restart the docker machine:
docker-machine start <machine name>
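(The UI steps above can also be done from the command line with VBoxManage; a sketch assuming the machine is named default:)
docker-machine stop default
VBoxManage modifyvm default --cpus 8   # VM must be powered off to modify
docker-machine start default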
Finally, you can use the CPU constraint options available for the docker run command to restrict CPU usage of your containers if desired.
For example, the following command restricts a container to use only 3 CPUs:
docker run -ti --cpuset-cpus="0-2" ubuntu:14.04 /bin/bash
More details are available in the docker run reference document.
I just create the machine with all CPUs:
docker-machine create -d virtualbox --virtualbox-cpu-count=-1 dev
-1 means use all available CPUs.
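(To verify the new machine sees all host CPUs, a sketch using nproc from the busybox-based alpine image:)
eval "$(docker-machine env dev)"
docker run --rm alpine nproc   # should print the host's CPU count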
