I am writing a Rails application that supports image uploads using ActiveStorage. I'm trying to write a system test, which runs in Firefox. Firefox is driven by Selenium over the network; it is installed inside a Docker container.
I can write a system test that runs and passes. The Docker image I'm using supports viewing the tests interactively through noVNC: if you navigate to http://localhost:7900 in a browser, you can watch the tests run and interact with the test browser from your host system's browser.
The test I've written uploads a file and checks that the file was uploaded by going to its dedicated "show" page. The page loads and renders an image tag like
<img src="http://web:37193/rails/active_storage/blobs/redirect/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaHBVZz09IiwiZXhwIjpudWxsLCJwdXIiOiJibG9iX2lkIn19--4c0a214932e528a8cea25a8f6bd6aad7b1e19994/rails-logo.svg">
Unfortunately, the image is blank in the browser, and Firefox reports that it "could not load the image". The source is definitely correct, and the network can reach it: if I add a breakpoint and visit the given URL in the test browser, it downloads the image just fine. It also works fine in my development environment, by the way. "37193" is the port Capybara is running on; "web" is the name of my Rails container (my docker-compose is below).
In my spec/rails_helper.rb,
Capybara.server_host = '0.0.0.0'
Capybara.app_host = "http://#{ENV.fetch("HOSTNAME")}"
Capybara.default_host = "http://#{ENV.fetch("HOSTNAME")}"

config.before(:each, type: :system) do
  # To watch the system tests interactively, visit localhost:7900
  driven_by :selenium, using: :firefox, screen_size: [1000, 1000],
            options: { browser: :remote, url: 'http://firefox:4444' }
  default_url_options[:host] = ENV.fetch("HOSTNAME")
end
and docker-compose.yml,
version: "3.9"
services:
  db:
    image: mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: app
      MYSQL_USER: user
      MYSQL_PASSWORD: password
  web:
    build: .
    container_name: web
    hostname: web
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/web
      - gems:/usr/local/bundle/
    ports:
      - "3000:3000"
    depends_on:
      - db
      - firefox
    tty: true
    stdin_open: true
    environment:
      DB_USER: root
      DB_NAME: app
      DB_PASSWORD: password
      DB_HOST: db
  firefox:
    image: selenium/standalone-firefox
    container_name: firefox
    ports:
      - "7900:7900"
      - "4444:4444"
    shm_size: "2gb"
    restart: unless-stopped
volumes:
  gems:
The issue turned out to be that I was using an SVG in the tests; I had only used a PNG when testing in my development environment, so I never noticed the difference.
An img tag whose src points to an SVG is valid. However, the image will only be displayed correctly if the response's Content-Type header identifies it as an image. To prevent XSS, ActiveStorage serves SVGs as application/octet-stream. For the case I'm testing (as the src of an img tag) that protection is unnecessary, because scripts are disabled in that context; but if a user opened the image in a new tab, they would be vulnerable.
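If the uploaded SVGs are trusted (or sanitized on upload), ActiveStorage can be told to serve them with their real content type again. A sketch for config/application.rb; note this re-opens the XSS vector described above:

```ruby
# config/application.rb — sketch: serve SVGs as image/svg+xml again.
# WARNING: only safe if uploaded SVGs are trusted or sanitized; otherwise
# a user opening the file directly is exposed to any scripts it contains.
config.active_storage.content_types_to_serve_as_binary.delete("image/svg+xml")
config.active_storage.content_types_allowed_inline << "image/svg+xml"
```

With this in place, the redirect endpoint responds with image/svg+xml and the img tag renders as expected; whether that trade-off is acceptable depends on where your SVGs come from.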
Related
I have docker-compose.yml on my local machine like below:
version: "3.3"
services:
  api:
    build: ./api
    volumes:
      - ./api:/api
    ports:
      - "3000:3000"
    links:
      - mysql
    depends_on:
      - mysql
  app:
    build: ./app
    volumes:
      - ./app:/app
    ports:
      - "80:80"
  mysql:
    image: mysql:8.0.27
    volumes:
      - ./mysql:/var/lib/mysql
    tty: true
    restart: always
    environment:
      MYSQL_DATABASE: db
      MYSQL_ROOT_PASSWORD: qwerty
      MYSQL_USER: db
      MYSQL_PASSWORD: qwerty
    ports:
      - "3306:3306"
The api service is a NestJS app; app and mysql are Angular and MySQL, respectively. I need to work with this setup locally.
How can I make it so that any change I make is applied without rebuilding the containers every time?
You don't have to build an image with your sources in it for a development environment. For NestJS, and since you're using Docker (I point this out deliberately, because other container runtimes exist), you can simply run a Node.js image from the Docker main registry: https://hub.docker.com/_/node.
You could run it with:
docker run -d -v "$PWD/app":/app -w /app node:12-alpine node index.js
N.B.: I chose 12-alpine for the example, and I'm assuming the file that starts your app is index.js; replace it with yours. Note that docker run -v needs an absolute host path, hence "$PWD/app".
You'll need to install the Node dependencies yourself, and they must be in the ./app directory.
For docker-compose, it could look like this:
version: "3.3"
services:
  app:
    image: node:12-alpine
    working_dir: /app
    command: node index.js
    volumes:
      - ./app:/app
    ports:
      - "80:80"
Same way for your API project.
For a production image, building the image with the sources in it is still the recommended approach.
Say you're working on your front-end application (app). It needs to make calls out to the other components, especially api. So you can start the things it depends on, but not the application itself:
docker-compose up -d api
Update your application's configuration for this different environment: if you would have proxied to http://api:3000 before, for example, you need to change this to http://localhost:3000 to connect through the container's published ports.
Now you can develop your application totally normally, without doing anything Docker-specific.
# outside Docker, on your normal development workstation
yarn run dev
$EDITOR src/components/Foo.tsx
You might find it convenient to use environment variables for these settings, since they will, well, differ per environment. If you're developing the back-end code but want to attach a live UI to it, you'll either need to rebuild the container or point the front-end's back-end URL at the host system.
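For example, a sketch of a dev launcher script, assuming a hypothetical API_URL variable (the name and default are my invention; adapt them to your framework's convention):

```shell
#!/bin/sh
# Hypothetical dev launcher: API_URL defaults to the API container's
# published port on localhost, but other environments can override it,
# e.g. API_URL=http://api:3000 when running inside the compose network.
API_URL="${API_URL:-http://localhost:3000}"
echo "Proxying API requests to $API_URL"
# exec yarn run dev   # in a real project, start the dev server here
```

This keeps the Docker-specific and local values out of the source tree entirely.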
This approach also means you do not need to bind-mount your application's code into the container, and I'd recommend removing those volumes: blocks.
I'm trying to set up a development environment using PhpStorm and Docker on a Windows 10 machine.
When a remote PHP interpreter is selected, PhpStorm stops responding with the message "Checking PHP Installation":
docker-compose.yaml
version: '3'

networks:
  symfony:

services:
  # nginx
  nginx-ea:
    image: nginx:stable-alpine
    container_name: nginx-ea
    ports:
      - "8081:80"
    volumes:
      - ./app:/var/www/project
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php74-fpm
      - mysql8-ea
    networks:
      - symfony

  # php 7.4
  php74-fpm:
    build:
      context: .
      dockerfile: ./php/Dockerfile
    container_name: php74-fpm
    ports:
      - "9001:9000"
    volumes:
      - ./app:/var/www/project
      - ./php/conf:/usr/local/etc/php/
    networks:
      - symfony

  php74-cli:
    # define the directory where the build should happen,
    # i.e. where the Dockerfile of the service is located;
    # all paths are relative to the location of docker-compose.yml
    build:
      context: ./php-cli
    container_name: php74-cli
    # reserve a tty - otherwise the container shuts down immediately
    # corresponds to the "-i" flag
    tty: true
    # mount the app directory of the host to /var/www in the container
    # corresponds to the "-v" option
    volumes:
      - ./app:/var/www/project
    # connect to the network
    # corresponds to the "--network" option
    networks:
      - symfony

  # mysql 8
  mysql8-ea:
    image: mysql:8
    container_name: mysql8-ea
    ports:
      - "4309:3306"
    volumes:
      - ./mysql:/var/lib/mysql
    command: --default-authentication-plugin=mysql_native_password --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    restart: always # always restart unless stopped manually
    environment:
      MYSQL_USER: root
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_PASSWORD: secret
    networks:
      - symfony

  # phpMyAdmin
  phpmyadmin-ea:
    image: phpmyadmin/phpmyadmin:5.0.1
    container_name: phpmyadmin-ea
    restart: always
    environment:
      PMA_HOST: mysql8-ea
      PMA_USER: root
      PMA_PASSWORD: secret
    ports:
      - "8090:80"
    networks:
      - symfony
Docker Desktop Windows 10 settings
I have tried selecting both the php74-fpm container and the php74-cli container; as soon as the settings are applied, PhpStorm stops responding completely.
Any idea what is wrong in here?
UPDATE
Including PhpStorm logs from system\log\idea.log
# appears in logs when Remote PHP Interpreter settings applied
2020-11-27 09:14:00,859 [ 479670] DEBUG - php.config.phpInfo.PhpInfoUtil - Loaded helper: /opt/.phpstorm_helpers/phpinfo.php
2020-11-27 09:14:01,106 [ 479917] INFO - .CloudSilentLoggingHandlerImpl - Creating container...
2020-11-27 09:14:02,019 [ 480830] INFO - .CloudSilentLoggingHandlerImpl - Creating container...
Two Docker containers are created after the Remote PHP Interpreter settings are applied; however, they don't seem to start, and the logs inside the containers don't say anything.
If I try starting the "phpstorm_helpers_PS-191.8026.56" container manually from Docker Desktop, it seems to start OK.
If I manually try to start the "festive_zhukovsky..." container, it doesn't start.
The log inside the container prints XML:
https://drive.google.com/file/d/1d5XbkJdHnc7vuN0V7heJdx3lBkmJfs3V/view?usp=sharing
UPDATE 2
If I remove the local PHP version (which comes with the XAMPP package) in PhpStorm, the window on the right shows where PhpStorm hangs and becomes unresponsive:
UPDATE 3
According to this article, https://ollyxar.com/blog/docker-phpstorm-windows,
Docker should have a Shared Drives configuration.
However, I don't seem to have this option in the Docker Desktop settings:
Can this be the problem?
I have a Symfony app (v4.3) running in a Docker setup. This setup also has a container for Mailcatcher. No matter how I set the MAILER_URL in the .env file, no mail shows up in Mailcatcher. If I just call the regular PHP mail() function, the mail pops up in Mailcatcher. The setup has been used for other projects as well and worked without a flaw.
Only with the Symfony Swiftmailer do the mails not get through.
My docker-compose file looks like this:
version: '3'
services:
  #######################################
  # PHP application Docker container
  #######################################
  app:
    container_name: project_app
    build:
      context: docker/web
    networks:
      - default
    volumes:
      - ../project:/project:cached
      - ./etc/httpd/vhost.d:/opt/docker/etc/httpd/vhost.d
    # cap and privileged needed for slowlog
    cap_add:
      - SYS_PTRACE
    privileged: true
    env_file:
      - docker/web/conf/environment.yml
      - docker/web/conf/environment.development.yml
    environment:
      - VIRTUAL_HOST=.project.docker
      - POSTFIX_RELAYHOST=[mail]:1025

  #######################################
  # Mailcatcher
  #######################################
  mail:
    image: schickling/mailcatcher
    container_name: project_mail
    networks:
      - default
    environment:
      - VIRTUAL_HOST=mail.project.docker
      - VIRTUAL_PORT=1080
I played around with the MAILER_URL setting for hours, but everything has failed so far.
I hope somebody here has an idea how to set the MAILER_URL.
Thank you
According to your docker-compose.yml, your MAILER_URL should be:
smtp://project_mail:1025
Double-check that it has the correct value.
Then you can view the container logs using docker-compose logs -f mail to see whether your messages reach the service at all.
It will be something like:
==> SMTP: Received message from '<user@example.com>' (619 bytes)
Second: try restarting your containers. Sometimes changes in .env* files are not applied instantly.
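For reference, the corresponding line in .env would look like this (a sketch; project_mail is the container name from the compose file above, and the compose service name mail should resolve on the same Docker network as well):

```
# .env — point Swiftmailer at the Mailcatcher container's SMTP port
MAILER_URL=smtp://project_mail:1025
```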
I'm trying to get a Docker-ready Vue application set up using the UI tool the Vue framework supplies (the command vue ui starts the UI, which is then accessed via the web browser).
I was able to set up a project successfully, but only by using the command-line method, vue create app-name-here, and going through the prompts. Below is an image of it working that way.
I wanted to use the Vue UI so I could follow along with some tutorials and explore its features, but for some reason I can't get it to work.
As you can see in the images below, it says it's ready on port 8000; the docker-compose file is indeed set to publish port 8000, which is also confirmed via the docker ps command, as seen below.
I can also verify that port 8000 is not being used by another process on my host machine (lsof -i tcp:8000); the command shows that only Docker is using it, as it should.
I have done everything in my power to ensure the port is open, but when I go to the web browser to see the UI, all I get is "can't be found", which is strange because the default project works just fine.
How can I get the Vue UI to work through docker like this?
NOTE
I start the vue ui server after running docker-compose by exec'ing into the container like this:
docker exec -it front_end_node /bin/bash
From there I can simply run vue ui, which is what you see in the screenshots above.
Docker-Compose File
version: "3.7"
services:
  dblive:
    image: mysql:8.0
    volumes:
      - ./db_data_live:/var/lib/mysql
      - ./database_config/custom:/etc/mysql/conf.d
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 123456
      MYSQL_DATABASE: live
      MYSQL_USER: someadmin
      MYSQL_PASSWORD: somepassword
      MYSQL_ROOT_HOST: '%'
  dbdev:
    image: mysql:8.0
    volumes:
      - ./db_data_dev:/var/lib/mysql
      - ./database_config/custom:/etc/mysql/conf.d
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 123456
      MYSQL_DATABASE: dev
      MYSQL_USER: someadmin
      MYSQL_PASSWORD: 123456
      MYSQL_ROOT_HOST: '%'
  phpmyadmin:
    depends_on:
      - dblive
      - dbdev
    image: phpmyadmin/phpmyadmin:latest
    environment:
      PMA_ARBITRARY: 1
    restart: always
    volumes:
      - ./phpmyadmin/config.user.inc.php:/etc/phpmyadmin/config.user.inc.php
    ports:
      - "8081:80"
  front_end_node:
    image: "poolservice/callcenter:1.0"
    container_name: front_end_node
    depends_on:
      - dblive
      - dbdev
    user: "node"
    working_dir: /home/node/app
    environment:
      #- NODE_ENV=production
      - NODE_ENV=development
    volumes:
      - ./app/call-center:/home/node/app
    ports:
      # Standard HTTP dev port
      - "8080:8080"
      # Vue UI port
      - "8000:8000"
      # SSH port
      - "443:443"
    # tail prints the output of a process (either all of it or a specified amount);
    # tail -F streams a changing file in real time, which keeps the container
    # running so Docker does not quit.
    # tail -F command here
    command: "/bin/sh -c 'cd call-center && npm run serve'"
The Dockerfile
# Use base image node 12
FROM node:12.9.1

# Set working directory in the container
WORKDIR /home/node/app

# Install the Vue CLI globally so we have access to its commands.
RUN npm install -g @vue/cli

# Expose ports 3000, 8080 and 8000
EXPOSE 3000
EXPOSE 8080
EXPOSE 8000
New Things Recently Tried
I got the UI to work on my normal host and noticed it used port 8001, so I tried that port instead; it still didn't work.
When I exec'd into the container I ran vue ui --port 8001 to make sure it was on that port, and still had no luck, even after ensuring the port was open.
I did notice that on my Mac it tries to open a browser, which is not possible inside a Docker container. It also tries to access the files, so I'm not sure whether this has something to do with it not working. I will investigate further...
Okay, so I finally figured out what the issue was. After typing vue ui --help, I looked at the list of options.
vue ui --headless --port 8000 --host 0.0.0.0
I experimented with starting the command this way and discovered that you want to run in headless mode, and in particular that the host has to be 0.0.0.0. By default it was localhost, which does not work with Docker!
I hope this helps someone else trying to use the UI in Docker!
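To avoid exec'ing into the container each time, the same flags can go straight into the compose command (a sketch based on the front_end_node service shown in the question; the call-center path and port are taken from there):

```yaml
# Sketch: replace the front_end_node command so the UI starts headless
# and binds to all interfaces, making the published port 8000 reachable.
command: "/bin/sh -c 'cd call-center && vue ui --headless --port 8000 --host 0.0.0.0'"
```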
In package.json I set this line, and the problem was solved:
"serve": "vue-cli-service serve --host 0.0.0.0",
I have ported my WordPress site into a local Docker container; the home page is working fine. Here is my folder structure:
/
-docker-compose.yml
-src (my WordPress code, copied from production into this folder)
-db (contains the DB dump file)
My docker-compose.yml file looks like this:
version: '2'
services:
  wordpress:
    image: wordpress
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_NAME: wordpress_wp
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: mypw
    volumes:
      - ./src:/var/www/html
  mysql:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: mypw
    volumes:
      - ./db/my_wp.sql:/docker-entrypoint-initdb.d/my_wp.sql
Then I ran docker-compose up, and the site is accessible at http://localhost:8080/.
The issue is that when I click on a menu item, it redirects me to my production site, i.e. http://my-production-site/contact-us.
How can I fix the URLs automatically?
It sounds as though either your theme is using absolute URLs, or you haven't changed the WordPress "site URL" to localhost, hence it redirecting you to the production site when you click any menu.
A couple of options here:
change your links within your theme to be relative
modify your site URL within WordPress to be localhost
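For the second option, a minimal sketch (adjust the port to match the compose file above): setting these two constants in wp-config.php overrides the site URL stored in the database.

```php
<?php
// wp-config.php — sketch: override the stored site/home URL for local work.
// WP_HOME and WP_SITEURL take precedence over the values in wp_options.
define( 'WP_HOME',    'http://localhost:8080' );
define( 'WP_SITEURL', 'http://localhost:8080' );
```

Note that absolute links stored inside post content still point at the old domain; WP-CLI's wp search-replace command can rewrite those in the database.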