How to run Protractor on Docker?

I'm a newb with docker & protractor so please bear with me.
I have an app that uses Python and Django for its backend API, and Angular.js for its frontend, with e2e tests written in Protractor. This is how I think I should proceed:
Set up a Docker container for my backend, which is in Python/Django, then expose its API through some port.
Create another container (or a layer, not sure which) for the Angular.js frontend.
Download an image for Protractor and build the container.
Connect all of these containers through a Docker network?
Alternative
Run backend on local machine.
Create a Docker container for Protractor and somehow point the e2e tests at the container?
Please help me review the steps to achieve this. This video gives some insight, but I'm not sure where to start.

Your initial idea is just about right. When setting this up, I typically use a docker-compose file like so...
# docker-compose.yml
version: '2'
services:
  backend:
    build: ./backend
    command: <your django startup command>
  db:
    image: <postgres or whatever>
  frontend:
    build: ./frontend
    command: <npm start or equivalent>
    ports:
      - "80:80"
Then, I would run my tests with
docker-compose run --rm frontend <MY TESTING COMMAND HERE>
Docker Compose handles the Docker networking for you; in that case your frontend would be able to access your backend at http://backend:<port>. Protractor, npm, and all that fun stuff are installed in your frontend container.
The one major pain point you haven't thought of yet is that Protractor requires a display to work; it won't work with a headless browser like PhantomJS, and your Docker containers will usually not provide a display. This repo is an example of how to install a real browser and provide it a fake display so that it will work in a container: https://github.com/mark-adams/docker-chromium-xvfb. Basically, you replace the Chrome startup script with a shell script that starts an Xvfb interface and attaches the browser to it.
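As a rough illustration of that approach (not taken verbatim from the linked repo; the display number and paths are assumptions), the wrapper script that stands in for the Chrome binary could look something like this:
#!/bin/bash
# chrome-xvfb.sh - give Chrome a virtual display, then launch the real browser.
# Assumes Xvfb and google-chrome are installed in the frontend image.
Xvfb :99 -screen 0 1280x1024x24 &
export DISPLAY=:99
exec google-chrome --no-sandbox "$@"
With something like that baked into the frontend image, the docker-compose run --rm frontend <MY TESTING COMMAND HERE> call above can drive a real Chrome instance even though the container has no physical display.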

Related

How to run scraper code in a Docker container next to a standalone Selenium container?

I am currently re-deploying a simple Python project from my local Windows machine to an Azure remote host with the help of Docker Compose (I'm totally new to this).
The problem is that some parts (a scraper) of the main logic rely on Selenium and the Chrome driver. However, I don't want to extract the scraper logic and move it out, as I hope the main logic can live in a single Celery container.
I am wondering if I could achieve this by running a parallel standalone Selenium container defined in the docker-compose.yml file, like:
services:
  chrome:
    image: selenium/standalone-chrome:latest
    hostname: chrome
  main_data_tasks:
    build:
    command: 'celery -A task_proj worker'
    links:
      - chrome
    depends_on:
      - chrome
If that's possible, how do I introduce that particular Selenium container's "context" into the main Celery container where all the code is running? Or is this a dead end, and will I need to call the Selenium container from the Celery one?
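With a standalone Selenium container, the usual pattern is not to share any "context" but to have the scraper talk to Selenium over the Compose network through its remote WebDriver endpoint. A minimal sketch, assuming the Celery code reads a SELENIUM_URL environment variable (that variable name is an illustration, not from the question):
services:
  chrome:
    image: selenium/standalone-chrome:latest
    hostname: chrome
  main_data_tasks:
    build: .
    command: 'celery -A task_proj worker'
    environment:
      # the scraper would open a remote WebDriver session against this URL
      - SELENIUM_URL=http://chrome:4444/wd/hub
    depends_on:
      - chrome
Port 4444 and the /wd/hub path are the defaults exposed by the selenium/standalone-chrome image.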

Push image built with docker-compose to dockerhub

I have a Golang script which interacts with Postgres. I created a service in docker-compose.yml for both Golang and Postgres. When I run it locally with "docker-compose up" it works perfectly, but now I want to create one single image to push to my Docker Hub so it can be pulled and run with just "docker run ". What is the correct way of doing this?
The image created by "docker-compose up --build" launches with no error via "docker run " but immediately stops.
docker-compose.yml:
version: '3.6'
services:
  go:
    container_name: backend
    build: ./
    volumes:
      - # some paths
    command: go run ./src/main.go
    working_dir: $GOPATH/src/workflow/project
    environment: #some env variables
    ports:
      - "80:80"
  db:
    image: postgres
    environment: #some env variables
    volumes:
      - # some paths
    ports:
      - "5432:5432"
Dockerfile:
FROM golang:latest
WORKDIR $GOPATH/src/workflow/project
CMD ["/bin/bash"]
I am a newbie with Docker, so any comments on how to do things idiomatically are appreciated.
docker-compose does not combine Docker images into one; it runs (with up), or builds and then runs (with up --build), Docker containers based on the images defined in the yml file.
More info is in the official docs:
Compose is a tool for defining and running multi-container Docker applications.
So, in your example, docker-compose will run two containers:
1 - based on the go configurations
2 - based on the db configurations
To see which containers are actually running, use the command:
docker ps -a
For more info, see the Docker docs.
It is always recommended to run each service in a separate container, but if you insist on making an image which has both Golang and Postgres, you can take a Postgres base image and install Golang on it, or the other way around: take a Golang-based image and install Postgres on it.
The installation steps can be done inside the Dockerfile, please refer to:
- postgres official Dockerfile
- golang official Dockerfile
Combine them to get both.
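If you really do want a single image, a rough sketch of that combination might look like the following. This is illustrative only (the package names, paths, and the start.sh wrapper are assumptions), and it keeps the same source layout as your compose file:
# Combined image: Go toolchain plus Postgres in one container (generally discouraged).
FROM golang:latest
RUN apt-get update && \
    apt-get install -y --no-install-recommends postgresql && \
    rm -rf /var/lib/apt/lists/*
WORKDIR $GOPATH/src/workflow/project
COPY . .
# Build the binary at image build time instead of "go run" at container start.
RUN go build -o /usr/local/bin/project ./src/main.go
# A container has one main process, so a small wrapper script (hypothetical,
# not shown) would need to start postgres in the background and then the binary.
CMD ["./start.sh"]
In practice it is much simpler to keep the db service separate and only publish the Go image.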
Edit: (digital ocean deployment)
Well, if you copy everything (the Docker images and the yml file) to your droplet, it should bring the application up and running, similar to what happens when you do the same on your local machine.
An example can be found here: How To Deploy a Go Web Application with Docker and Nginx on Ubuntu 18.04
In production, usually for large-scale/high-traffic applications, more advanced solutions are used, such as:
- docker swarm
- kubernetes
For more info on Kubernetes on DigitalOcean, please refer to the official docs.
Hope this helps you find your way.
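Coming back to the original question of pushing the compose-built image to Docker Hub, a minimal command sequence might look like this (the repository and tag names are placeholders, and the local image name depends on your project directory, since Compose names built images <projectdir>_<service>):
docker-compose build
docker tag <projectdir>_go myuser/workflow:latest
docker login
docker push myuser/workflow:latest
Alternatively, adding an image: key to the go service makes Compose tag the built image with that name directly, so only login and push remain.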

Docker compose - external links fail after successful restart

The situation is this:
I have three different docker-compose files for three different projects: a frontend, a middleware, and a backend. The FE is Ember; the middleware and backend are Spring (Boot), which should not matter here though. The middleware uses an external_link to the backend, and the frontend (UI) uses an external_link to the middleware.
When I start with a clean Docker (docker stop $(docker ps -aq), docker rm $(docker ps -aq)), everything works fine: I start the backend with docker-compose up, then the middleware, then the frontend. Everything is nice and all external links do work (I'm also running Cypress e2e tests on this setup - works fine).
Now, when I change something in the middleware, rebuild the image, stop the container (Ctrl+C) and restart it using docker-compose up, and then try to restart the frontend (Ctrl+C and then docker-compose up), Docker will tell me:
Starting UI ... error
ERROR: for UI Cannot start service ui: Cannot link to a non running container: /32f2db8e96a1_middleware AS /ui/backend
ERROR: for UI Cannot start service ui: Cannot link to a non running container: /32f2db8e96a1_middleware AS /ui/backend
ERROR: Encountered errors while bringing up the project.
Now what irritates me:
Where is the "32f2db8e96a1" coming from? The middleware container name is set to "middleware", which is also used in the external link of the UI, and works fine for every clean startup (meaning, after removing all containers with docker rm first). Also, docker ps shows me that a container for the middleware is actually running.
Unfortunately, I cannot post the compose files here, but I am willing to add any info needed.
Running on Docker version 18.09.0, build 4d60db4
Ubuntu 18.04.1 LTS
I would like to restart any of these containers without a broken external link. How do I achieve this?
Since you guys are taking time for me, I took the time to clean up two of the compose files. This is the UI/frontend one:
version: '2.1'
services:
  ui:
    container_name: x-ui
    build:
      dockerfile: Dockerfile
      context: .
    image: "xxx/ui:latest"
    external_links:
      - "middleware:backend"
    ports:
      - "127.0.0.1:4200:80"
    network_mode: bridge
This is the middleware:
version: '2.1'
services:
  middleware:
    container_name: x-middleware
    image: xxx/middleware:latest
    build:
      dockerfile: src/main/docker/middleware/Dockerfile
      context: .
    ports:
      - "127.0.0.1:8080:8080"
      - "127.0.0.1:9003:9000"
    external_links:
      - "api"
    network_mode: "bridge"
The "api" one is essentially the same as middleware.
Please note: I removed volumes and environment. I also renamed things, so the error message names will not match perfectly. The naming scheme is the same though: the service name is plain, e.g. "middleware", while the container name uses an "x-" prefix, e.g. "x-middleware".
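One common way to avoid stale link references in a setup like this is to drop external_links and network_mode: bridge in favour of a shared, pre-created user-defined network, on which containers resolve each other by name via Docker's built-in DNS. A hedged sketch (the network name appnet is an assumption), shown here for the middleware project:
# created once on the host: docker network create appnet
version: '2.1'
services:
  middleware:
    container_name: x-middleware
    image: xxx/middleware:latest
    networks:
      - appnet
networks:
  appnet:
    external: true
The UI and API compose files would join the same external network and reach the middleware by name (e.g. x-middleware:8080), so restarting one project no longer invalidates a link held by another.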

Docker composed services can't communicate by service name

tldr: I can't communicate with a docker composed service by its service name in order to make requests to an api running in networked containers.
I have a single page application that makes requests to a json api. Its Dockerfile looks like this:
FROM nginx:alpine
COPY dist /usr/share/nginx/html
EXPOSE 80
A build process does its thing and puts all the static assets in a dist directory, which is then copied to the html directory of the nginx web server.
I have a mock json api powered by json-server. Its Dockerfile looks like this:
FROM node:7.10.0-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 3000
CMD [ "npm", "start" ]
I have a docker-compose file that looks like this:
version: '2'
services:
badass-ui:
image: mydocker-hub/badass-ui
container_name: badass-ui
ports:
- "80:80"
badderer-api:
image: mydocker-hub/badderer-api
container_name: badderer-api
ports:
- "3000:3000"
I'm able to build both containers successfully, and am able to run "docker-compose up" with both containers running smoothly. Fetch requests from badass-ui to badderer-api:3000/users return "net::ERR_NAME_NOT_RESOLVED". Fetch requests to http://192.168.99.100:3000/users (or whatever the container IP may be) work fine. I thought that by using Docker Compose I would be able to reference the name of a service defined in docker-compose.yml as a domain name, and that this would enable communication between the containers via domain name. This doesn't seem to work. Is there something wrong with my docker-compose.yml? I'm on Windows 10 Home edition, using the tools that come with the Docker Quickstart terminal for Windows. I'm using docker-compose version 1.13.0, docker version 17.05.0-ce, docker-machine version 0.11.0 and VirtualBox 5.1.20.
Since you are using docker-compose.yml version 2, links should not be necessary. Containers within a compose network should be able to resolve other compose containers by service name.
Reading the comments on your question, it seems like the networking and hostname resolution work, so the problem is probably in your web UI. I don't see you passing any type of configuration to the UI application saying where to find the API. Maybe there is a hard-coded URL to the API in your UI causing the error?
Edit:
Is your UI a client-side/JavaScript app? Are you sure the app isn't actually making the call from your browser? Your browser, which runs on your local machine and not in Docker, will not be able to resolve the badderer-api hostname.
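A quick way to confirm that distinction is to compare name resolution from inside the Compose network with resolution from the host (whether wget or curl is available depends on the images; nginx:alpine ships BusyBox wget):
# From inside the UI container, the service name should resolve:
docker-compose exec badass-ui wget -qO- http://badderer-api:3000/users
# From the host (or the browser), only the published port works:
curl http://192.168.99.100:3000/users
If the first call succeeds while the browser request fails, the fix is to have the UI call the API through a host-reachable address, or to let nginx proxy those requests to badderer-api inside the network.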

Development workflow for server and client using Docker Compose?

I'm developing a server and its client simultaneously and I'm designing them in Docker containers. I'm using Docker Compose to link them up and it works just fine for production but I can't figure out how to make it work with a development workflow in which I've got a shell running for each one.
My docker-compose-devel.yml:
server:
  image: node:0.10
client:
  image: node:0.10
  links:
    - server
I can do docker-compose up client or even docker-compose run client but what I want is a shell running for both server and client so I can make rapid changes to both as I develop iteratively.
I want to be able to do docker-compose run server bash in one window and docker-compose run --no-deps client bash in another window. The problem with this is that no address for the server is added to /etc/hosts on the client because I'm using docker-compose run instead of up.
The only solution I can figure out is to use docker run and give up on Docker Compose for development. Is there a better way?
Here's a solution I came up with that's hackish; please let me know if you can do better.
docker-compose-devel.yml:
server:
  image: node:0.10
  command: sleep infinity
client:
  image: node:0.10
  links:
    - server
In window 1:
docker-compose --file docker-compose-dev.yml up -d server
docker exec --interactive --tty $(docker-compose --file docker-compose-dev.yml ps -q server) bash
In window 2:
docker-compose --file docker-compose-dev.yml run client bash
I guess your main problem is about restarting the application when there are changes in the code.
Personally, I launch my applications in development containers using forever.
forever -w -o log/out.log -e log/err.log app.js
The w option restarts the server when there is a change in the code.
I use a .foreverignore file to exclude the changes on some files:
**/.tmp/**
**/views/**
**/assets/**
**/log/**
If needed, I can also launch a shell in a running container:
docker exec -it my-container-name bash
This way, your two applications could restart independently without the need to launch the commands yourself. And you have the possibility to open a shell to do whatever you want.
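For reference, a hedged sketch of how that could be wired into the server service of the compose file (the ./src mount path and app.js entry point are assumptions, and forever must already be installed in the image, e.g. via a custom Dockerfile):
server:
  image: node:0.10
  command: forever -w -o log/out.log -e log/err.log app.js
  working_dir: /src
  volumes:
    - ./src:/src
The client service could be set up the same way, so both restart on code changes, and you open shells with docker exec only when you need them.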
Edit: New proposition considering that you need two interactive shells and not simply the possibility to relaunch the apps on code changes.
Having two distinct applications, you could have a docker-compose configuration for each one.
The docker-compose.yml from the "server" app could contain this kind of information (I added different kind of configurations for the example):
server:
  image: node:0.10
  links:
    - db
  ports:
    - "8080:80"
  volumes:
    - ./src:/src
db:
  image: postgres
  environment:
    POSTGRES_USER: dev
    POSTGRES_PASSWORD: dev
The docker-compose.yml from the "client" app could use external_links to be able to connect to the server.
client:
  image: node:0.10
  external_links:
    - project_server_1:server # Use "docker ps" to know the name of the server's container
  ports:
    - "80:80"
  volumes:
    - ./src:/src
Then, use docker-compose run --service-ports service-name bash to launch each configuration with an interactive shell.
Alternatively, the extra_hosts key may also do the trick by calling the server app through a port exposed on the host machine.
With this solution, each docker-compose.yml file could be commited in the repository of the related app.
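A minimal sketch of that extra_hosts alternative, assuming the host is reachable from the container at 172.17.0.1 (the default bridge gateway; this address is an assumption and varies by setup):
client:
  image: node:0.10
  extra_hosts:
    - "server:172.17.0.1" # resolve "server" to the host, where the server's port is published
  ports:
    - "80:80"
  volumes:
    - ./src:/src
The client then talks to the server via the port published on the host (8080 in the example above) instead of through a container-to-container link.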
First thing to mention: for a development environment you want to use volumes in docker-compose to mount your app into the container when it's started (at runtime). Sorry if you're already doing this and I'm pointing it out anyway, but it's not clear from your docker-compose.yml.
To answer your specific question - start your containers normally, then when doing docker-compose ps you'll see the names of your containers, for example 'web_server' and 'web_client' (where 'web' is the directory of your docker-compose.yml file, or the name of the project).
When you got name of the container you want to connect to, you can run this command to run bash exactly in the container that's running your server:
docker exec -it web_server bash
If you want to learn more about setting up a development environment for a reasonably complex app, check out this article on development with docker-compose.
