I am a little bit blocked here.
I am using an Ubuntu 14 machine with Atom, where I am developing a Drupal-based system. The system is installed in a docker container that runs in a VM managed by Vagrant.
I can work perfectly in Atom and run the local server to check changes. The problem is that using kint/dump is not enough, so I decided to install xdebug in the docker container and php-debug on the host machine. I also installed "The easiest Xdebug" add-on in Firefox. But they don't seem to connect.
These are the steps I followed so far:
From the docker container:
pecl install xdebug
inserted in php.ini:
zend_extension=/usr/local/lib/php/extensions/no-debug-non-zts-20131226/xdebug.so
inserted in xdebug.ini:
xdebug.remote_enable=1
xdebug.remote_autostart=0
xdebug.remote_connect_back=1
xdebug.remote_port=9000
xdebug.remote_log=/tmp/php5-xdebug.log
xdebug.remote_handler=dbgp
From the host machine, in ../provision/docker-compose.yml, I added the following:
environment:
  XDEBUG_CONFIG: remote_host=192.168.33.33
In Firefox's add-on, I set the IDE key to
xdebug.atom
From Atom, in php-debug - Settings - Path Maps:
/url;/home/myname/www/path/cms/
I am confused by the last one; I tried different approaches, and I am sure there are other settings to configure. What am I missing?
My experience with Docker and Xdebug is that you have to use the host's IP address on the Docker network.
Do a docker inspect [your_container_name] | grep -i gateway (when it's running) and use that IP for the remote host configuration in the xdebug.ini file.
If you want to debug further, I recommend putting the remote host configuration directly in xdebug.ini, to rule out problems with the environment variable not being passed correctly.
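For example, if docker inspect reported a gateway of 172.17.0.1 (an illustrative value; use the IP from your own output), the relevant xdebug.ini lines would become (note that xdebug.remote_connect_back must be 0 for remote_host to take effect):
xdebug.remote_enable=1
xdebug.remote_connect_back=0
xdebug.remote_host=172.17.0.1
xdebug.remote_port=9000
xdebug.remote_handler=dbgp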
After installing php-debug, put the settings below into your Atom config (config.cson):
"*":
  "php-debug":
    PathMaps: [
      "/path/to/app/in/docker;/path/to/app/in/local"
    ]
    ServerPort: 9000
  welcome:
    showOnStartup: false
To get more information and instructions, you can read this post.
I am trying to install docker engine inside a container.
wget https://desktop.docker.com/linux/main/amd64/docker-desktop-4.16.2-amd64.deb
apt-get install -y ./docker-desktop-4.16.2-amd64.deb
Everything goes fine until, in the post-install phase, it tries to update the /etc/hosts file for Kubernetes. Here it fails:
/var/lib/dpkg/info/docker-desktop.postinst: line 42: /etc/hosts: Read-only file system
This is expected behaviour for docker build, which does not allow modifying /etc/hosts inside the container.
Is there a way to solve this? Install docker desktop without doing this step? Or any other way?
I solved this issue by adding this parameter to the build:
--add-host kubernetes.docker.internal:127.0.0.1
Example:
docker build --add-host kubernetes.docker.internal:127.0.0.1 -t stsjava2 .
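If you build through docker-compose instead, the same mapping should go under the build section's extra_hosts option (a sketch, assuming a Compose version that supports it; the service name is made up):
services:
  stsjava2:
    build:
      context: .
      extra_hosts:
        - "kubernetes.docker.internal:127.0.0.1"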
When the Docker desktop installation fails with an error related to "/etc/hosts", it is usually due to a conflict with the host system's configuration. Here are some steps that you can try to resolve the issue:
Check the permissions of the /etc/hosts file on your host system to ensure that it is accessible to Docker.
Try to start the Docker container with elevated privileges (e.g., using sudo) to see if that resolves the issue.
If the above steps do not resolve the issue, you can try modifying the Docker container's network configuration to use a different network driver that does not conflict with the host system's /etc/hosts file.
You can also try running the Docker container in a different environment (e.g., a virtual machine) that does not have the same conflicts with the host system.
If all else fails, you can try reinstalling Docker or using a different version of Docker to see if that resolves the issue.
I am trying to work with rviz over a remote ssh connection. When I execute the command rosrun rviz rviz, this error appears:
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-root'
qt.qpa.screen: QXcbConnection: Could not connect to display
Could not connect to any X display.
I already added the -X flag during the ssh connection, via ssh -X myusername@host, but nothing changes.
I don't know what else to do, so any help would be welcomed.
I am working from a Mac computer (macOS Catalina); remotely, I am working on a workstation with Docker, and my image has Ubuntu 18.04 and ROS Melodic.
Thank you in advance.
EDIT:
I just tried to execute rviz locally on the workstation and the same error appears, so I suppose the ssh connection is not the problem. Could the problem be due to Docker or to the workstation (an Nvidia DGX Station)? Could it be a permissions issue?
Thank you.
I don't currently know much about docker, but does the following work for you:
user@local $ export ROS_MASTER_URI=http://your_remote's_hostname:11311
user@local $ rosrun rviz rviz
And see https://wiki.ros.org/ROS/NetworkSetup for the details + ip configuration on both machines.
Update: Here are some instructions about running GUI apps in docker on a Mac; they may be useful (in case you haven't seen them already).
I have a docker container with ROS that I use to run rviz and other UI apps (ROS's QT-based apps do not work in KDE Neon).
The docker-compose.yml contains the following:
##############
version: "3.8"
services:
  ros:
    container_name: ros1
    network_mode: host
    # I created my own image, with my own user, etc
    image: YOUR_IMAGE
    volumes:
      # you can ignore this line if you want (I'll explain below)
      # - /home/ichramm/devel/robots:/home/ichramm/devel/robots
      - /etc/localtime:/etc/localtime:ro
      - /tmp/.X11-unix:/tmp/.X11-unix:ro
      - /home/ichramm/.Xauthority:/home/ichramm/.Xauthority:ro
      - /run/user/1000:/run/user/1000:ro
      - /run/user/1000/bus:/run/user/1000/bus:ro
    command: /entrypoint.sh
    environment:
      USER: ichramm
      DISPLAY: ${DISPLAY}
      XDG_RUNTIME_DIR: /tmp/runtime-${USER}
      DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus
    devices:
      #- "/dev/ttyUSB0:/dev/ttyUSB0"
      #- "/dev/dri/card0:/dev/dri/card0"
      #- "/dev/dri/card1:/dev/dri/card1"
You should try to map the mounted volumes to your system. I understand that you are on a Mac, which means this may not work for you.
This works for me, but I don't use ssh; I just use two scripts:
❯ cat docker-run.sh
#!/bin/bash
docker exec -ti -w $(pwd) ros1 ./wrapper.sh "$@"
❯ cat wrapper.sh
#!/bin/bash
export XDG_RUNTIME_DIR=/tmp/runtime-$USER
source env.sh
"$@"
In order for this to work you need the following:
Mount the working directory in the container (see commented line above)
Have a file env.sh which sources ROS's setup.bash and the workspace's devel/setup.bash, for example:
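A minimal sketch of such an env.sh, assuming ROS Melodic and a workspace under /home/ichramm/devel/robots (the distro comes from the question, the path from the compose file above; adjust both to your setup):
#!/bin/bash
# Source the ROS distribution, then the workspace overlay
source /opt/ros/melodic/setup.bash
source /home/ichramm/devel/robots/devel/setup.bash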
Of course, the experience with those scripts is limited; that's why I also enter the container directly using the following:
❯ cat enter-env.sh
#!/bin/bash
docker exec -ti -w /home/ichramm/devel/robots ros1 /bin/bash
This works only because the container's directory structure matches the host's (note that I only mount the development directory anyway). I also added a user with the same name, UID, and GID as on the host to prevent file permission issues.
If you can't make it work, I suggest you turn to a VM. Just install Ubuntu 20.04 without a UI (or disable it later with sudo systemctl set-default multi-user.target) and use SSH with X forwarding. I worked with that setup before switching to docker, and I still have the VM in case something happens.
Update: Bear in mind that I am doing some potentially insecure things, like mounting .Xauthority. It works for me because no one else has access to my computer.
I installed Docker and downloaded an Ubuntu distro to run with Laravel Sail, planning to use Swoole PHP. I made the distro the default and set the WSL version to 2,
with the docker-compose.yml provided by Laravel Sail.
But every time I try to run the sail up command, it gives me this error: "Unsupported operating system [MINGW64_NT-10.0]. Laravel Sail supports macOS, Linux, and Windows (WSL2)."
Any ideas how to fix this?
If you are using Windows, follow these steps.
Make sure Ubuntu and WSL 2 are installed; you can follow the instructions here.
After a successful installation of Ubuntu, Windows and Ubuntu share the same file system; this means that if you want to run Ubuntu commands on a folder path, you have to open the corresponding terminal. If you are using Visual Studio Code, you can open the Ubuntu terminal for the current file directory by selecting Ubuntu as the new terminal; it will open on the current path.
You can now run ./vendor/bin/sail up. Make sure you are running the command from your project folder in Ubuntu, which is usually in /mnt/c/users/path/to/project.
If you see that Docker is not running, open Docker Desktop, go to Settings, and under Resources select WSL and enable integration with your Ubuntu distro.
Restart your terminal and run the command again from your project folder under the Ubuntu file system, which is usually in /mnt/c/users/path-to-laravel-project.
One last thing: in order to access your Laravel app on localhost, you have to use the WSL IP address that connects to Ubuntu. Use the command wsl hostname -i; it outputs an IP address (note that your IP address may be different). Navigate to that IP address on the default port 80.
I wish you a successful project. Enjoy!!
You need to run the sail up command from inside your WSL2 Ubuntu image, not directly from your Windows terminal. Once you do that, it should work.
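A typical session would look like this (the project path is a placeholder):
wsl -d Ubuntu                            # open the WSL2 Ubuntu shell from Windows
cd /mnt/c/users/path-to-laravel-project  # switch to the project folder
./vendor/bin/sail up                     # Sail now detects Linux/WSL2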
I use Docker Toolbox on Windows 7 in a corporate environment. My workflow requires pulling containers from one artifactory and pushing them to a different one (e.g., external and internal). Each artifactory requires a different proxy to access it. Is there a way to configure the Docker daemon to select a proxy based on the URL? Or, if not, what else can I do to make this work?
Since, as Pierre B. mentioned, the Docker daemon does not support URL-based proxy selection, the solution is to point it to a local proxy configured to select the proper upstream proxy based on the URL.
While any HTTP[S] proxy capable of upstream selection would do (the pac4cli project being particularly interesting for its advertised capability to select the upstream based on the proxy auto-discovery protocol used by most Web browsers in a corporate setting), I've chosen tinyproxy as a more mature and lightweight solution. Furthermore, I've decided to run my proxy inside the docker-machine VM in order to simplify its deployment and make sure the proxy is always running when the Docker daemon needs it.
Below are the steps I used to set up my system. I'm especially grateful to phoenix for providing steps to set up Docker Toolbox on Windows behind a corporate proxy, and will borrow heavily from that answer.
From this point on I will assume either Docker Quickstart Terminal or GitBash, with docker in the PATH, as your command line console and that "username" is your Windows user name.
Step 1: Build tinyproxy on your target platform
Begin by pulling a clean Linux distribution, I used CentOS, and run bash inside it:
docker run -it --name=centos centos bash
Next, install the tools we'll need:
yum install -y make gcc
After that, we pull the latest release of Tinyproxy from its GitHub repository and extract it inside root's home directory (at the time of this writing the latest release was 1.10.0):
cd
curl -L https://github.com/tinyproxy/tinyproxy/releases/download/1.10.0/tinyproxy-1.10.0.tar.gz \
| tar -xz
cd tinyproxy-1.10.0
Now let's configure and build it:
./configure --enable-upstream \
            --disable-filter \
            --disable-reverse \
            --disable-transparent \
            --disable-xtinyproxy
make
While --enable-upstream is obviously required, disabling the other default features is optional but good practice. To make sure it actually works, run:
./src/tinyproxy -h
You should see something like:
Usage: tinyproxy [options]
Options are:
-d Do not daemonize (run in foreground).
-c FILE Use an alternate configuration file.
-h Display this usage information.
-v Display version information.
Features compiled in:
Upstream proxy support
For support and bug reporting instructions, please visit
<https://tinyproxy.github.io/>.
We exit the container by pressing Ctrl+D and copy the executable to a special folder location accessible from the docker-machine VM:
docker cp centos://root/tinyproxy-1.10.0/src/tinyproxy \
/c/Users/username/tinyproxy
Substitute "username" with your Windows user name. Please note that the double slash (//) before "root" is required to disable MINGW path conversion.
Now we can delete the container:
docker rm centos
Step 2: Point docker daemon to a local proxy port
Choose a TCP port number to run the proxy on. This can be any port that is not in use on the docker-machine VM. I will use number 8618 in this example.
First, let's delete the existing default Docker VM:
WARNING: This will permanently erase all currently stored containers and images
docker-machine rm -f default
Next, we re-create the default machine, setting the HTTP_PROXY and HTTPS_PROXY environment variables to the local host and the port we selected, and then refresh our shell environment:
docker-machine create default \
--engine-env HTTP_PROXY=http://localhost:8618 \
--engine-env HTTPS_PROXY=http://localhost:8618
eval $(docker-machine env)
Optionally, we could also set the NO_PROXY environment variable to list hosts and/or wildcards (separated by ;) to which the daemon should connect directly, bypassing the proxy, for example:
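For example (hypothetical values, reusing the corporate domain from Step 3 below):
docker-machine create default \
    --engine-env HTTP_PROXY=http://localhost:8618 \
    --engine-env HTTPS_PROXY=http://localhost:8618 \
    --engine-env NO_PROXY="localhost;127.0.0.1;*.corp.example.com"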
Step 3: Set up tinyproxy inside docker-machine VM
First, we will create two files in the /c/Users/username directory (this is where our tinyproxy binary should reside after Step 1 above) and then we'll copy them to the VM.
The first file is tinyproxy.conf. The exact syntax is documented on the Tinyproxy website, but the example below should have all the settings needed:
# These settings can be customized to your liking,
# the port though must be the same we used in Step 2
listen 127.0.0.1
port 8618
user nobody
group nogroup
loglevel critical
syslog on
maxclients 50
startservers 2
minspareservers 2
maxspareservers 5
disableviaheader yes
# Here is the actual proxy selection, rules apply from top
# to bottom, and the last one is the default. More info on:
# https://tinyproxy.github.io/
upstream http proxy1.corp.example.com:80 ".foo.example.com"
upstream http proxy2.corp.example.com:80 ".bar.example.com"
upstream http proxy.corp.example.com:80
In the example above:
http://proxy1.corp.example.com:80 will be used to connect to URLs that end with "foo.example.com", such as http://www.foo.example.com
http://proxy2.corp.example.com:80 will be used to connect to URLs that end with "bar.example.com", such as http://www.bar.example.com, and
http://proxy.corp.example.com:80 will be used to connect to all other URLs
It is also possible to match exact host names, IP addresses, subnets and hosts without domains.
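For illustration, such rules might look like this (hypothetical patterns, same syntax as above):
upstream http proxy1.corp.example.com:80 "www.foo.example.com"
upstream http proxy1.corp.example.com:80 "192.168.1.0/255.255.254.0"
upstream http proxy1.corp.example.com:80 "intranet-host"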
The second file is the shell script that will launch the proxy; its name must be bootlocal.sh:
#! /bin/sh
# Terminate on error
set -e
# Switch to the script directory
cd $(dirname $0)
# Launch proxy server
./tinyproxy -c tinyproxy.conf
Now, let's connect to the docker VM, get root, and switch to boot2docker directory:
docker-machine ssh
sudo -s
cd /var/lib/boot2docker
Next, we'll copy all three files over and set their permissions:
cp /c/Users/username/{tinyproxy{,.conf},bootlocal.sh} .
chmod 755 tinyproxy bootlocal.sh
chmod 644 tinyproxy.conf
Exit VM session by pressing Ctrl+D twice and restart it:
docker-machine restart default
That's it! Now docker should be able to pull and push images from different URLs, automatically selecting the right proxy server.
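As an optional sanity check, you can ask the daemon which proxy settings it picked up (the exact output format varies between Docker versions):
docker info | grep -i proxy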
I'm getting the same thing every time trying to run busybox either with docker on fedora 20 or running boot2docker in VirtualBox:
[me@localhost ~]$ docker run -it busybox
Unable to find image 'busybox:latest' locally
Pulling repository busybox
FATA[0105] Get https://index.docker.io/v1/repositories/library/busybox/images: read tcp 162.242.195.84:443: i/o timeout
I can open https://index.docker.io/v1/repositories/library/busybox/images in a browser, sometimes even without a VPN tunnel, so I tried setting a proxy in the network settings to the one provided by Astrill's VPN sharing, but it always times out.
I am currently in China, where there basically is no Internet due to the firewall. npm, git, and wget seem to use the Astrill proxy in the terminal (when it is set in the network settings of Fedora 20), but somehow I either can't get the docker daemon to use it, or something else is wrong.
It seems the answer was not so complicated according to the documentation (I had read it before but thought setting the proxy in the network settings UI would take care of it).
So I added the following to /etc/systemd/system/docker.service.d/http-proxy.conf (after creating the docker.service.d directory and the conf file):
[Service]
Environment="HTTP_PROXY=http://localhost:3213/"
Environment="HTTPS_PROXY=http://localhost:3213/"
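For the drop-in file to take effect, reload systemd and restart the docker service:
sudo systemctl daemon-reload
sudo systemctl restart docker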
In the Astrill app (I'm sure other providers' applications offer something similar) there is an option for VPN sharing which will create a proxy; it can be found under Settings => VPN Sharing.
For git, npm, and wget, setting the proxy in the UI (gnome-control-center => Network => Network Proxy) is enough, but when using sudo it's better to do sudo su, set the environment variables, and then run the command that needs the proxy, for example:
sudo su
export http_proxy=http://localhost:3213/
export ftp_proxy=http://localhost:3213/
export all_proxy=socks://localhost:3213/
export https_proxy=http://localhost:3213/
export no_proxy=localhost,127.0.0.0/8,::1
export NO_PROXY="/var/run/docker.sock"
npm install -g ...
I'd like to update the solution for people who still encounter this issue today.
I don't know the details, but when using the WireGuard protocol on Astrill, docker build and docker run will use the VPN. If for some reason it doesn't work, try restarting the docker service (sudo service docker restart) while the VPN is active.
Hope it helps; I just wasted an hour trying to figure out why it stopped working.