Starting Firefox inside container makes new window in host Firefox - docker

I am currently trying to make a docker container that connects to a VPN (via openfortivpn) and then opens a Firefox instance that uses that connection. When no Firefox is running on my host, everything works fine: the container starts the VPN connection and then opens the Firefox application connected to my X server. But if my host Firefox is running when I start the container, it opens a new window in my host Firefox and then the container exits with the message:
feulo@branca:~/vpen-test$ docker-compose up
Recreating 07_complex_compose_openfortivpn_1 ... done
Attaching to 07_complex_compose_openfortivpn_1
07_complex_compose_openfortivpn_1 exited with code 0
Does anyone know how to fix this? Thanks for the help.
These are the Dockerfile and docker-compose.yml files:
Dockerfile
# Use an official Debian Slim image
FROM debian:buster-slim
# Install needed packages
RUN apt update \
&& DEBIAN_FRONTEND="noninteractive" apt -y install dbus ppp openfortivpn iceweasel
docker-compose.yml
version: '3.2'
services:
  openfortivpn:
    working_dir: /workdir
    build: .
    privileged: true
    devices:
      - /dev/snd
    volumes:
      - .:/workdir
      - /tmp/.X11-unix:/tmp/.X11-unix
    environment:
      - DISPLAY=unix$DISPLAY
    command: sh entrypoint.sh
entrypoint.sh
openfortivpn -c dti.txt &
firefox

Found the solution in Docker with shared X11 socket: Why can it "start" Firefox outside of the container?
You just need to start Firefox with the -new-instance flag.
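With that flag, the entrypoint script from the question becomes:
openfortivpn -c dti.txt &
firefox -new-instance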

Related

Containerizing a Vue application does not display webpage on localhost

I'm trying to containerize my Vue application created with vue-cli. I have a docker-compose.yml looking like this:
version: '3.8'
services:
  npm:
    image: node:current-alpine
    ports:
      - '8080:8080'
    stdin_open: true
    tty: true
    working_dir: /app
    entrypoint: [ 'npm' ]
    volumes:
      - ./app:/app
The docker-compose.yml and the /app folder with the Vue source code are in the same directory:
/vue-project
  /app (vue code)
  /docker-compose.yml
I install my node dependencies:
docker-compose run --rm npm install
They install correctly in the container as I see the folder appear in my host.
I am running this command to start the server:
docker-compose run --rm npm run serve
The server starts to run correctly:
App running at:
- Local: http://localhost:8080/
It seems you are running Vue CLI inside a container.
Access the dev server via http://localhost:<your container's external mapped port>/
Note that the development build is not optimized.
To create a production build, run npm run build.
But I cannot access it at http://localhost:8080/ from my browser. I've tried different ports, and I've also tried to run the command like this:
docker-compose run --rm npm run serve --service-ports
But none of this works. I've looked at other Dockerfiles, but they are so different from mine. What exactly am I doing wrong here?
docker ps -a
is showing these:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cf0d5bc724b7 node:current-alpine "npm run serve --ser…" 21 minutes ago Up 21 minutes docker-compose-vue_npm_run_fd94b7dd5be3
ff7ac833536d node:current-alpine "npm" 22 minutes ago Exited (1) 22 minutes ago docker-compose-vue-npm-1
Your compose file instructs Docker to publish the container's port 8080 to the host's port 8080, yet your docker ps output shows no published ports for the running container.
Is it possible your containerized app is not listening on 8080?
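One detail worth checking (an assumption on my part, not confirmed by the asker): docker-compose run only publishes the ports declared in the compose file when given --service-ports, and the flag must come before the service name; placed after it, as in the question, it is passed through to npm as an argument:
docker-compose run --rm --service-ports npm run serve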

Docker is extremely slow when running Laravel on Nginx container wsl2

I've updated Windows 10 to 2004 latest version, installed wsl2 and updated it, installed docker, and ubuntu.
When I create a simple index.php file with "Hello World", it works perfectly (response: 100-400ms), but when I add my Laravel project it becomes miserable: it loads for 7 seconds before performing the request and the response takes 4-7 seconds 😢, even though phpMyAdmin is running very smoothly (response: 1-2 seconds).
my docker-compose.yml file:
version: '3.8'
networks:
  laravel:
services:
  nginx:
    image: nginx:stable-alpine
    container_name: nginx
    ports:
      - "8080:80"
    volumes:
      - ./src:/var/www/html
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php
      - mysql
      - phpmyadmin
    networks:
      - laravel
  mysql:
    image: mysql:latest
    container_name: mysql
    restart: unless-stopped
    tty: true
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: secret
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    networks:
      - laravel
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    restart: always
    depends_on:
      - mysql
    ports:
      - 8081:80
    environment:
      PMA_HOST: mysql
      PMA_ARBITRARY: 1
  php:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: php
    volumes:
      - ./src:/var/www/html
    ports:
      - "9000:9000"
    networks:
      - laravel
  composer:
    image: composer:latest
    container_name: composer
    volumes:
      - ./src:/var/www/html
    working_dir: /var/www/html
    depends_on:
      - php
    networks:
      - laravel
  npm:
    image: node:latest
    container_name: npm
    volumes:
      - ./src:/var/www/html
    working_dir: /var/www/html
    entrypoint: ['npm']
  artisan:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: artisan
    volumes:
      - ./src:/var/www/html
    depends_on:
      - mysql
    working_dir: /var/www/html
    entrypoint: ['php', '/var/www/html/artisan']
    networks:
      - laravel
I've been trying to fix this issue for 2 days but couldn't find the answer.
Thanks
It looks like you are mounting your Laravel project in your container. This could result in very poor file I/O if you are mounting these files from your Windows environment to WSL 2, since WSL 2 currently has a lot of problems accessing files that are on the Windows environment. This I/O issue exists as of July 2020; you can find the ongoing status of the issue on GitHub here.
There are three possible solutions I can think of that will resolve this issue for now.
Disable WSL 2 based engine for docker until the issue is resolved
Since this issue only occurs when WSL 2 tries to access the Windows filesystem, you could choose to disable WSL 2 docker integration and run your containers on your Windows environment instead. You can find the option to disable it in the UI of Docker Desktop (screenshot omitted).
Store your project in the Linux filesystem of WSL 2
Again, since this issue occurs when WSL 2 tries to access the mount points of the Windows filesystem under /mnt, you could choose to store your project onto the Linux filesystem of WSL 2 instead.
Build your own Dockerfiles
You could choose to create your own Dockerfiles and, instead of mounting your project, COPY the desired directories into the docker images. This would result in poor build performance, since WSL 2 will still have to access your Windows filesystem in order to build these docker images, but the runtime performance will be much better, since it won't have to retrieve these files from the Windows environment every time.
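A minimal Dockerfile sketch of that idea (the base image and paths are my assumptions based on the compose file above, not part of the original answer):
FROM php:7.4-fpm
WORKDIR /var/www/html
# Bake the source into the image instead of bind-mounting it from Windows
COPY ./src /var/www/html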
Just move the whole source project to the folder
\\wsl$\Ubuntu-20.04\home\<User Name>\<Project Name>
The speed will then be very fast, as if running on native Linux.
(Before and after screenshots omitted.)
I experienced the same problem with mysql database requests and responses taking about 8 to 10 seconds for each request/response. The problem definitely relates to the mounting of files between the Windows file system and Windows WSL2. After many days of trying to resolve this issue, I found this post:
https://www.createit.com/blog/slow-docker-on-windows-wsl2-fast-and-easy-fix-to-improve-performance/
After implementing the steps specified in the post, it totally eliminated the problem, reducing database requests/responses to milliseconds. Hopefully this will assist someone experiencing the same issue.
Ok, so I found an interesting fact :))
Running docker on Windows without WSL 2.
A request has a TTFB of 5.41s. This is the index.php file. I used die() to check where the time grows, and I found that if I use die() after terminate, the TTFB becomes ~2.5s.
<?php
/**
 * Laravel - A PHP Framework For Web Artisans
 *
 * @package Laravel
 * @author Taylor Otwell <taylor@laravel.com>
 */
define('LARAVEL_START', microtime(true));
require __DIR__.'/../../application/vendor/autoload.php';
$app = require_once __DIR__.'/../../application/bootstrap/app.php';
#die(); <-- TTFB 1.72s
$kernel = $app->make(Illuminate\Contracts\Http\Kernel::class);
$response = $kernel->handle(
    $request = Illuminate\Http\Request::capture()
);
$response->send();
#die(); <-- TTFB 2.67s
$kernel->terminate($request, $response);
#die(); <-- TTFB 2.74s
# if there is no die() in the file, the TTFB is ~6s
You are running your project in an /mnt/xxx folder, aren't you?
This is because wsl2 filesystem performance is much slower than wsl1 in /mnt.
If you want a very short solution, here it is. It works on Ubuntu 18.04 and Debian from the Windows store:
Go to the docker settings and turn on Expose daemon on tcp://localhost:2375 without TLS and turn off Use the WSL 2 based engine.
Run this command:
clear && sudo apt-get update && \
sudo curl -fsSL https://get.docker.com -o get-docker.sh && sudo sh get-docker.sh && sudo usermod -aG docker $USER && \
sudo curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose && sudo chmod +x /usr/local/bin/docker-compose && \
echo "export PATH=\"$PATH:$HOME/.local/bin\"" >> ~/.profile && source ~/.profile && \
echo "export DOCKER_HOST=tcp://localhost:2375" >> ~/.bashrc && source ~/.bashrc && \
printf '[automount]\nroot = /\noptions = metadata' | sudo tee -a /etc/wsl.conf
I wrote instructions on how to integrate Docker Desktop with WSL 1:
https://github.com/CaliforniaMountainSnake/wsl-1-docker-integration
I was facing the same issue with Laravel/Docker/Nginx on Windows 11.
I couldn't disable "Use the WSL 2 based engine", because it was greyed out, even after installing Hyper-V on Windows 11 Home (tweak).
Here is the best solution I found:
1. Copy your project into your WSL folder
Open Windows Explorer and type the following address:
\\wsl.localhost\
Select your WSL instance, and then you can copy your project into /home/yourUsername/
The full URL will be something like:
\\wsl.localhost\ubuntu\home\username\yourProject
2. Start docker containers
Just open a terminal in this folder and start your containers,
e.g. docker-compose up -d
3. Visual Studio Code
To open the project folder from Visual Studio Code:
CTRL + P, >Remote-WSL: Open folder in WSL...
To open the project folder from the command line:
code --remote wsl+Ubuntu /home/username/yourProject
You can exclude the vendor folder in your compose file, like this:
volumes:
  - ./www:/var/www
  - vendor:/var/www/vendor
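For this to work, the named volume also has to be declared at the top level of the compose file (a sketch under that assumption; the php service name here is illustrative):
services:
  php:
    volumes:
      - ./www:/var/www
      - vendor:/var/www/vendor
volumes:
  vendor: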
This is a really sketchy way of improving the speed, but here it goes:
Problem
The speed of loading composer dependencies from a vendor dir, which is mapped from the project root on Windows to a docker container via WSL 2, is currently reaaaaaly slow.
Solution
Copy vendor dir to docker image and use it as opposed to the mapped one in project root.
The project structure using MySQL database and Apache PHP 7.4 with composer auto-loading looks like this:
db
  - init.sql
dev
  - db
    - Dockerfile
    - data.sql
  - www
    - Dockerfile
    - vendor-override.php
  - docker-compose.yaml
src
  - ...
vendor
  - ...
composer.json
index.php
...
The idea here is to keep dev stuff separated from the main root dir.
dev/docker-compose.yaml
version: '3.8'
services:
  test-db:
    build:
      context: ../
      dockerfile: dev/db/Dockerfile
  test-www:
    build:
      context: ../
      dockerfile: dev/www/Dockerfile
    ports:
      - {insert_random_port_here}:80
    volumes:
      - ../:/var/www/html
Here we have two services, one for MySQL database and one for Apache with PHP, which maps web root /var/www/html to our project root. This enables Apache to see the project source files (src and index.php).
dev/db/Dockerfile
FROM mysql:5.7.24
# Add initialize script (adding 0 in front of it makes sure it is executed first, as the scripts are loaded alphabetically)
ADD db/init.sql /docker-entrypoint-initdb.d/0init.sql
# Add test data (adding 99 in front of it makes sure it is executed last)
ADD dev/db/data.sql /docker-entrypoint-initdb.d/99data.sql
dev/www/Dockerfile
FROM php:7.4.0-apache-buster
# Install PHP extensions and the dependencies required by them
RUN apt-get update -y && \
    apt-get install -y libzip-dev libpng-dev libssl-dev libxml2-dev libcurl4-openssl-dev && \
    docker-php-ext-install gd json pdo pdo_mysql mysqli ftp simplexml curl
# Enable Apache mods and .htaccess files
RUN a2enmod rewrite && \
    sed -e '/<Directory \/var\/www\/>/,/<\/Directory>/s/AllowOverride None/AllowOverride All/' -i /etc/apache2/apache2.conf
# Add the composer vendor dir to improve loading speed, since it is located inside Linux
ADD vendor /var/www/vendor
ADD dev/www/vendor-override.php /var/www/
RUN chmod -R 777 /var/www && \
    mkdir /var/www/html/src && \
    ln -s /var/www/html/src /var/www/src
# Expose the html dir for easier deployments
VOLUME /var/www/html
I'm using the official Apache buster image with PHP 7.4.
Here we copy the vendor dir and vendor-override.php to a dir above the webroot (/var/www) so they don't interfere with the project root.
ADD vendor /var/www/vendor
ADD dev/www/vendor-override.php /var/www/
Next we set read/write/execute permissions for everybody so Apache can read it. This is necessary because it's outside the webroot.
chmod -R 777 /var/www
Now the trick here is to make sure composer auto-loads classes from the src dir. This is solved by creating a link from /var/www/src to /var/www/html/src, which is in our project root.
mkdir /var/www/html/src
ln -s /var/www/html/src /var/www/src
dev/www/vendor-override.php
<?php
# Override the default composer dependencies to be loaded from inside docker. This is used because
# loading files over mapped volumes is really slow on WSL 2.
require_once "/var/www/vendor/autoload.php";
Simply use the vendor dir inside the docker image.
index.php
$fixFile = "../vendor-override.php";
if (file_exists($fixFile))
    require_once $fixFile;
else
    require_once "vendor/autoload.php";
...
If the vendor-override.php file is detected, it is used instead of the vendor dir from the project root. This makes sure index.php loads the dir inside the docker image, which is way faster.
composer.json
{
  "autoload": {
    "psr-4": {
      "Namespace\\": ["src"]
    }
  },
  ...
}
This simple autoloading setup maps "Namespace" to the src dir in the project root.
Key things to note
index.php loads vendor-override.php instead of vendor from the project root
PSR-4 autoloading is solved by linking
Downside
The downside is that you have to rebuild the docker image every time you update dependencies.
I just enabled Hyper-V from PowerShell:
DISM /Online /Enable-Feature /All /FeatureName:Microsoft-Hyper-V
I restarted, and now it works reasonably well. I have not changed anything else whatsoever.
See more info about how to enable Hyper-V here.
By default, opcache is not enabled in the php:8.0-apache docker container.
Add in dockerfile:
RUN docker-php-ext-install opcache
COPY ./opcache.ini /usr/local/etc/php/conf.d/opcache.ini
Create file opcache.ini:
[opcache]
opcache.enable=1
; 0 means it will check on every request
; 0 is irrelevant if opcache.validate_timestamps=0 which is desirable in production
opcache.revalidate_freq=0
opcache.validate_timestamps=1
opcache.max_accelerated_files=10000
opcache.memory_consumption=192
opcache.max_wasted_percentage=10
opcache.interned_strings_buffer=16
opcache.fast_shutdown=1
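As the comments above note, for production you would typically disable timestamp validation instead, so files are never re-checked (a sketch, not part of the original answer):
[opcache]
opcache.enable=1
opcache.validate_timestamps=0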

Docker containers are not able to see each other in docker compose, nor is host

There are a couple of issues that I have encountered while working with Docker.
The host (Windows 10) is not able to access any resources hosted by containers.
The containers themselves also do not see each other.
My config:
version: '3.7'
services:
  chrome:
    build: ./chrome
    container_name: chrome_container
    ports:
      - "9223:9223"
  dotnetapp:
    depends_on:
      - chrome
    build: ./dotnetapp
    container_name: live_container
    environment:
      - ASPNETCORE_ENVIRONMENT=Production
    stdin_open: true
    tty: true
Dockerfile for chrome (all it does is start Chrome with debugging enabled on port 9223):
FROM debian:buster-slim
# preparation
RUN apt-get update; apt-get clean
RUN apt-get install -y wget
RUN apt-get install -y gnupg2
# installing xvfb
RUN apt-get install -y xvfb
# installing Chrome
RUN wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
RUN apt-get update && apt-get install -y google-chrome-beta
EXPOSE 9223
COPY ./docker-entrypoint.sh /usr/local/bin/
RUN ln -s usr/local/bin/docker-entrypoint.sh / # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["test"]
docker-entrypoint.sh
#!/bin/bash
set -e
if [ "$1" = 'test' ]; then
    rm -f /tmp/.X99-lock
fi
xvfb-run -e /dev/stdout --server-args='-screen 0, 2560x1440x16' google-chrome --window-size=2560,1440 --no-sandbox --remote-debugging-port=9223 --user-data-dir=remote-profile https://www.google.com/
Dockerfile for the second app (used just for docker internal network testing):
FROM mcr.microsoft.com/dotnet/core/runtime:2.2
# run app
CMD ["/bin/bash"]
So, now, to elaborate on point 1:
I can launch the chrome container by running docker-compose up --build chrome.
On the host system I then try to open either localhost:9223 or http://172.17.0.2:9223/ in a browser, and in both cases I get a "page could not be reached" error. P.s. I got the IP from the docker inspect command.
On the other hand, if I go into the running container with docker exec -it chrome_container bash and execute curl localhost:9223, it shows a SUCCESS result.
At this point, if I try other addresses like curl chrome:9223, curl http://chrome:9223, curl chrome_container:9223, or curl http://chrome_container:9223, they also FAIL. According to the documentation, all containers within an internal network shall be accessible by the service host name. This is simply not the case in my scenario.
I also tried starting the image without relying on docker compose, like this: docker run -p 9223:9223 -it --rm all_chrome, but the result is the same. The resource is not available from the host system.
Now, to elaborate on issue 2:
When I run both applications with docker-compose up, log into the second app with docker exec -it live_container bash, and then try to access the first container using the above-mentioned URLs, all of them fail (curl localhost:9223, curl 0.0.0.0:9223, curl chrome:9223, curl http://chrome:9223, curl chrome_container:9223, curl http://chrome_container:9223).
I tried restarting Docker a couple of times and tried different ports. How can I figure out these things?
How do I access the resource at port 9223 from the host system?
Why isn't the second service able to see the first service using the host name, as documented here?
I am using Docker on Windows 10.
EDIT: More details.
When accessing by localhost, the following error is shown: (screenshot)
When accessing by IP, the following error is shown: (screenshot)
So it seems like something is happening when accessing by localhost on the host system (Win 10).
I just found, in another topic, the information that Chrome simply does not accept connections from outside of the localhost network while in debug mode.
So, starting the container as:
docker run -p 5656:5656 -it --rm all_chrome
And then, to solve the issue, I have to use a proxy. Here is an example:
socat tcp-listen:5656,fork tcp:localhost:9223
After that, accessing through localhost works fine.
I am yet to figure out how to start socat in daemon mode so I can make it part of the startup script in docker... but that's a minor thing.
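One way to do that (a sketch, not from the original answer) is to simply background socat in the container's entrypoint script before launching Chrome:
socat tcp-listen:5656,fork,reuseaddr tcp:localhost:9223 &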
Edit. A couple of notes.
Accessing the Chrome container from another container
If you try to access the debug session from another container using the IP, it works just fine:
curl 172.19.0.2:5656
Otherwise, if you try to use a host name, you see an error.
curl chrome:5656
error:
Host header is specified and is not an IP address or localhost.
This can be solved by stripping the Host header value:
curl chrome:5656 -H 'Host: '
The problem, still, is that this is not a very convenient approach, because applications do not always have the possibility to remove the header from the request, for example a chromedriver. Therefore, a solution would be to configure the chrome docker container to remove the Host header value from all incoming requests. I tried doing that with a squid proxy, but without success. So, instead, I came up with another solution.
The solution is to use static, fixed IP addresses on the containers' internal network. That way you can always be sure docker will use the same addresses, and therefore your applications can use them in their calls.
The config is the following:
version: '3.7'
services:
  chrome:
    networks:
      front:
        ipv4_address: 172.16.238.5
  app2:
    networks:
      front:
        ipv4_address: 172.16.238.10
networks:
  front:
    driver: bridge
    ipam:
      config:
        - subnet: 172.16.238.0/24
Now in app2 you can make calls using the fixed IP:
curl 172.16.238.5:5656
This is so far the easiest solution.
p.s. If you are running Chrome in headless mode, you can provide an additional argument, --remote-debugging-address, which in theory will solve some of your problems. Try it.
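For example (a sketch under that assumption; Chrome honors this flag only in headless mode):
google-chrome --headless --remote-debugging-address=0.0.0.0 --remote-debugging-port=9223 https://www.google.com/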

Access port of one container from another container

I have a postgres database in one container and a Java application in another container. The postgres database is accessible on port 1310 from localhost, but the java container is not able to access it.
I tried this command:
docker run modelpolisher_java java -jar ModelPolisher-noDB-1.7.jar --host=biggdb --port=5432 --user=postgres --passwd=postgres --dbname=bigg
But it gives the error java.net.UnknownHostException: biggdb.
Here is my docker-compose.yml file:
version: '3'
services:
  biggdb:
    container_name: modelpolisher_biggdb
    build: ./docker/bigg_docker
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=bigg
    ports:
      - "1310:5432"
  java:
    container_name: modelpolisher_java
    build: ./docker/java_docker
    stdin_open: true
    tty: true
Dockerfile for biggdb:
FROM postgres:11.4
RUN apt update &&\
apt install wget -y &&\
# Create directory '/bigg_database_dump/' and download bigg_database dump as 'database.dump'
wget -P /bigg_database_dump/ https://modelpolisher.s3.ap-south-1.amazonaws.com/bigg_database.dump &&\
rm -rf /var/lib/apt/lists/*
COPY ./scripts/restore_biggdb.sh /docker-entrypoint-initdb.d/restore_biggdb.sh
EXPOSE 1310:5432
Can somebody please tell me what changes I need to make in the docker-compose.yml, or in the command, so that the java container can access the ports of the biggdb (postgres) container?
The two containers have to be on the same Docker-internal network to be able to talk to each other. Docker Compose automatically creates a network for you and attaches containers to that network. If you docker run a container alongside that, you need to find that network's name.
Run
docker network ls
This will list the Docker-internal networks you have. One of them will be named something like bigg_default, where the first part is (probably) your current directory name. Then when you actually run the container, you can attach to that network with
docker run --net bigg_default ...
Consider setting a command: in your docker-compose.yml file to pass these arguments when you docker-compose up. If the --host option is in your own code and doesn't come from a framework, passing settings like this via environment variables can be a little easier to manage than command-line arguments.
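A sketch of that command: approach (the arguments are copied verbatim from the question; adjust to taste):
java:
  container_name: modelpolisher_java
  build: ./docker/java_docker
  command: java -jar ModelPolisher-noDB-1.7.jar --host=biggdb --port=5432 --user=postgres --passwd=postgres --dbname=bigg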
As you use docker-compose to bring up the two containers, they already share a common network. To take advantage of it, you should use docker-compose run and not docker run. Also, pass the service name (java) and not the container name (modelpolisher_java) to the docker-compose run command.
So just use the following command to run your jar:
docker-compose run java java -jar ModelPolisher-noDB-1.7.jar --host=biggdb --port=5432 --user=postgres --passwd=postgres --dbname=bigg

docker-compose with a war via volume fails on digital ocean debian but not other debian distros (my home box, work box as well)

On DigitalOcean:
If I use docker compose with a tomcat container that already has the war I am trying to use baked into webapps, compose works...
Dockerfile for Tomcat container with built in war (works):
FROM clegge/java7
MAINTAINER christopher.j.legge@gmail.com
RUN wget http://www.eu.apache.org/dist/tomcat/tomcat-7/v7.0.65/bin/apache-tomcat-7.0.65.tar.gz
RUN tar -xzf apache-tomcat-7.0.65.tar.gz
RUN mv apache-tomcat-7.0.65 /opt/tomcat7
RUN rm apache-tomcat-7.0.65.tar.gz
RUN echo "export CATALINA_HOME=\"/opt/tomcat7\"" >> ~/.bashrc
RUN sed -i '/<\/tomcat-users>/ i\<user username="test" password="test~" roles="admin,manager-gui,manager-status"/>' /opt/tomcat7/conf/tomcat-users.xml
VOLUME /opt/tomcat7/webapps
ADD app.war /opt/tomcat7/webapps/app.war
EXPOSE 8080
CMD /opt/tomcat7/bin/startup.sh && tail -f /opt/tomcat7/logs/catalina.out
If I use that container with docker compose, everything works great!
If I try to push the war file in via my docker-compose.yml, the container comes up but never inflates the war...
tomcat:
  image: tomcat:8.0
  ports:
    - "8080:8080"
  volumes:
    - base-0.2.war:/usr/local/tomcat/webapps/base.war
  links:
    - postgres
postgres:
  image: clegge/postgres
  ports:
    - "5432:5432"
  environment:
    - DB_USER_NAME=test_crud
    - DB_PASSWORD=test_crud
    - DB_NAME=base_db
There is no error in the logs...
This also fails on my GoDaddy box... I have tried every flavor of Linux on DigitalOcean... nothing works... I think it is a problem because the DigitalOcean instance is a container itself... any thoughts?
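One thing worth checking (my assumption; the thread never resolved this): in a compose file, a volume source without a ./ prefix is treated as a named volume rather than a relative host path, so Docker mounts an empty volume at base.war instead of the file, and there is nothing to inflate. An explicit relative path would look like:
volumes:
  - ./base-0.2.war:/usr/local/tomcat/webapps/base.war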
