TestCafe: Chromium: Error: Unable to establish one or more of the specified browser connections - docker

When running the E2E tests in CI (Bitbucket) using chromium:headless, they break with the following error:
Error: Unable to establish one or more of the specified browser connections. This can be caused by network issues or remote device failure.
at BrowserSet._waitConnectionsOpened (/opt/atlassian/pipelines/agent/build/node_modules/testcafe/src/runner/browser-set.js:83:30)
at Promise.resolve.then (/opt/atlassian/pipelines/agent/build/node_modules/testcafe/src/runner/browser-set.js:106:35)
Here comes the weird part: the CI ran the custom E2E pipeline on 9th November 2019 at 11:34pm, then on 10th November 2019 at 11:34pm it started failing, with the exact same code, running inside the same Dockerfile as the 9th Nov run.
What I have done so far:
Tried updating TestCafe to the latest version (1.6.1) - not working
Updated gherkin-testcafe to the latest version (2.4.2)
Tried running with:
'chromium ----no-sandbox'
'chromium:headless ----no-sandbox'
with no success
Tried running firefox:headless; a lot of tests start to fail, might have to dig into why they are failing...
Updated the Docker container with newer versions of everything - same error
Asked TestCafe to list the browsers, and Chromium is in the list:
+ npx testcafe --list-browsers
firefox
chromium
The Docker file:
# using debian:jessie for its smaller size over ubuntu
FROM debian:jessie
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Set environment variables
ENV appDir /var/www/app/current
# Run updates and install deps
RUN echo "deb http://packages.linuxmint.com debian import" >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y -q --force-yes \
python python-pip chromium chromium-l10n firefox xvfb curl wget \
&& pip install --upgrade awscli s3cmd python-magic \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get -y autoclean
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 10.8.0
# Install NPM packages
RUN curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.29.0/install.sh | bash \
&& source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& npm install -g serve \
&& nvm alias default $NODE_VERSION \
&& nvm use default
# Set up our PATH correctly so we don't have to long-reference npm, node, &c.
ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
EXPOSE 1337 1338
I had to redo our old Dockerfile because Gulp was creating issues with the latest Node, so I needed a way to control the Node version using NVM.
Bug logged on TestCafe's GitHub page: https://github.com/DevExpress/testcafe/issues/4489

I was able to get it to work by changing:
chromium:headless to 'chromium --headless --no-sandbox'
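For reference, the full invocation then looks roughly like this (the test path is only a placeholder):
npx testcafe 'chromium --headless --no-sandbox' tests/e2e/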
It is still weird to me how it worked one day and broke within 24 hours with the same source code and Docker image! Still curious to see what the TestCafe team will find; they are tracking the issue here:
https://github.com/DevExpress/testcafe/issues/4489#issuecomment-555061246

Why ----no-sandbox? I had similar issues this week with Bitbucket Pipelines where I forgot to add --no-sandbox, but adding it as written seemed to fix the issue...
Also, adding a .testcaferc.json file to specify your options makes writing the options a lot easier...
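A minimal .testcaferc.json sketch using the browser string from the fix above (the src glob is just an example):
{
  "browsers": ["chromium --headless --no-sandbox"],
  "src": "tests/e2e/**/*.js"
}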

Related

Ubuntu 20.04 packages missing all mo-files in docker image

For some reason, the iso-codes package does not install its files inside the Docker image.
Here is what I consider a more or less minimal Dockerfile:
FROM ubuntu:20.04
ENV TZ=Etc/UTC
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get upgrade -y && apt-get install -y locales
RUN sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \
dpkg-reconfigure --frontend=noninteractive locales && \
update-locale LANG=en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LC_ALL=en_US.UTF-8
RUN apt-get update && apt-get install -y iso-codes
RUN ls /usr
I have left the locale-related settings in case those are relevant. The same problem appears when I comment out everything except FROM and RUN apt-get update && apt-get install -y iso-codes.
Building:
docker build -t 'mytry:1' .
Now when I run the following, I do not see anything in the directory where mo-files should reside:
docker run --cidfile /tmp/docker_test.cid 'mytry:1' ls -R /usr/share/locale/en/LC_MESSAGES/
However, dpkg -l shows it's there:
ii iso-codes 4.4-1 all ISO language, territory, currency, script codes and their translations
And dpkg -L lists files in the directory:
/usr/share/locale/en/LC_MESSAGES
/usr/share/locale/en/LC_MESSAGES/iso_3166-2.mo
/usr/share/locale/en/LC_MESSAGES/iso_3166_2.mo
What is that I am missing? (I am using the specific docker run way just for simplicity. The same problem arises in the normal usage)
I also tried find / -name 'iso_3166-1.mo', but seems like there is no such file anywhere.
Also it seems like poedit-common, which also should have mo files, is missing them, so the problem is more general.
docker -v gives
Docker version 20.10.7, build 20.10.7-0ubuntu5~20.04.2
We have found the reason:
cat /etc/dpkg/dpkg.cfg.d/excludes
# Drop all man pages
path-exclude=/usr/share/man/*
# Drop all translations
path-exclude=/usr/share/locale/*/LC_MESSAGES/*.mo
# Drop all documentation ...
path-exclude=/usr/share/doc/*
# ... except copyright files ...
path-include=/usr/share/doc/*/copyright
# ... and Debian changelogs
path-include=/usr/share/doc/*/changelog.Debian.*
In order to get the locales, one should comment out the path-exclude line for /usr/share/locale/... (or replace the file) before installing the packages.
Of course, the size of the image can grow as a result.
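A minimal sketch of that fix in the Dockerfile, assuming you only care about the locale exclusion (the sed pattern is just one way to drop the line):
# remove the locale exclusion before installing packages that ship .mo files
RUN sed -i '/path-exclude=\/usr\/share\/locale/d' /etc/dpkg/dpkg.cfg.d/excludes && \
    apt-get update && apt-get install -y iso-codes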

Why can't I run a command in a Dockerfile but I can from within my Docker container?

I have the following Dockerfile. (The "n" package is a Node.js version manager.)
FROM ubuntu:18.04
SHELL ["/bin/bash", "-c"]
# Need to install curl, git, build-essential
RUN apt-get clean
RUN apt-get update
RUN apt-get install -y build-essential
RUN apt-get install -y curl
RUN apt-get install -y git
# Per docs, the following allows automated installation of n without installing node https://github.com/mklement0/n-install
RUN curl -L https://git.io/n-install | bash -s -- -y
# This refreshes the terminal to use "n"
RUN . /root/.bashrc
# Install node version 6.9.0
RUN /root/n/bin/n 6.9.0
This works perfectly and does everything I expect.
Unfortunately, after refreshing the terminal via RUN . /root/.bashrc, I can't seem to call "n" directly and instead I have to reference the exact binary using RUN /root/n/bin/n 6.9.0.
However, when I docker run -it container /bin/bash into the container and run the above sequence of commands, I am able to call "n" directly, like so: n 6.9.0, with no issues.
Why does the following command not work in the Dockerfile?
RUN n 6.9.0
I get the following error when I try to build my image:
/bin/bash: n: command not found
Each RUN command runs a separate shell and a separate container; any environment variables set in a RUN command are lost at the end of that RUN command. You must use the ENV command to permanently change environment variables like $PATH.
# Does nothing
RUN export FOO=bar
# Does nothing, if all the script does is set environment variables
RUN . ./vars.sh
# Needed to set variables
ENV FOO=bar
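Applied to the question, a minimal sketch of this fix (assuming n-install's default layout under /root/n, as shown in the question) would be:
# make n's prefix and binary visible to every later RUN step
ENV N_PREFIX=/root/n
ENV PATH=/root/n/bin:$PATH
# now the bare command resolves
RUN n 6.9.0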
Since a Docker image generally only contains one prepackaged application and its runtime, you don't need version managers like this. Install the single version of the language runtime you need, or use a prepackaged image with it preinstalled.
# Easiest
FROM node:6.9.0
# The hard way
FROM ubuntu:18.04
ARG NODE_VERSION=6.9.0
ENV NODE_VERSION=${NODE_VERSION}
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --assume-yes --no-install-recommends \
curl
RUN cd /usr/local \
&& curl -LO https://nodejs.org/dist/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-x64.tar.xz \
&& tar xJf node-v${NODE_VERSION}-linux-x64.tar.xz \
&& rm node-v${NODE_VERSION}-linux-x64.tar.xz \
&& for f in node npm npx; do \
ln -s ../node-v${NODE_VERSION}-linux-x64/bin/$f bin/$f; \
done

How can we install google-chrome-stable on alpine image in dockerfile using dpkg?

I am trying to install google-chrome-stable on an Alpine image using dpkg. dpkg itself gets installed, but it does not install google-chrome-stable and returns this error instead. Is there a way to install google-chrome-stable in an Alpine image, either using dpkg or another way?
dpkg: regarding google-chrome-stable_current_amd64.deb containing
google-chrome-stable:amd64, pre-dependency problem:
google-chrome-stable:amd64 pre-depends on dpkg (>= 1.14.0)
dpkg: error processing archive google-chrome-stable_current_amd64.deb (--install):
pre-dependency problem - not installing google-chrome-stable:amd64
Errors were encountered while processing:
Dockerfile:
# Base image
FROM ruby:2.6.3-alpine3.10
# Use node version 10.16.3, yarn version 1.16.0
RUN apk add --no-cache --repository http://dl-cdn.alpinelinux.org/alpine/v3.10/main/ nodejs=10.16.3-r0
RUN apk add --no-cache --repository http://dl-cdn.alpinelinux.org/alpine/v3.10/community/ yarn=1.16.0-r0
# Install dependencies
RUN apk upgrade
RUN apk --update \
add build-base \
git \
tzdata \
nodejs \
nodejs-npm \
bash \
curl \
yarn \
gzip \
postgresql-client \
postgresql-dev \
imagemagick \
imagemagick-dev \
imagemagick-libs \
chromium \
chromium-chromedriver \
ncurses \
less \
dpkg=1.19.7-r0 \
chromium \
chromium-chromedriver
RUN dpkg --add-architecture amd64
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb
# This is the base directory used in any
# further COPY, RUN and ENTRYPOINT commands
WORKDIR /webapp
# Copy Gemfile and Gemfile.lock and run bundle install
COPY Gemfile* /webapp/
RUN gem install bundler -v '1.17.3' && \
bundle _1.17.3_ install
# Copy everything to /webapp for docker image
COPY . ./
EXPOSE 3000
# Run the application
CMD ["rails", "server", "-b", "0.0.0.0"]
Installing the Chrome .deb file this way won't work on Alpine.
While the dpkg package is available in the Alpine repository, and is useful for installing lightweight Debian packages, you won't be able to use it for installing complex Debian packages, since it'll be impossible to satisfy many Debian dependencies. Alpine is generally not Debian compatible (relying on musl libc), so installing native Alpine packages using apk is the right way to go.
AFAIK, there's currently no Alpine Linux compatible (musl libc) build of Google Chrome.
You could, however, install the Chromium browser, which is available using an apk package:
apk add chromium
Another option is enabling glibc on a vanilla Alpine image, making it compatible with Debian binaries. This is a fairly simple procedure, see: Dockerfile. However, it may not be suitable for images with existing applications such as ruby:2.6.3-alpine3.10. Moreover, even with a glibc setup on Alpine, Chrome is not likely to run without issues. I have made a quick attempt (Dockerfile) but couldn't get past the first segfault.
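For reference, the usual community approach to that glibc shim looks roughly like this (a sketch based on the sgerrand/alpine-pkg-glibc project; 2.30-r0 is just an example release and the URLs may have changed):
# install a glibc compatibility layer on Alpine
RUN wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub && \
    wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.30-r0/glibc-2.30-r0.apk && \
    apk add glibc-2.30-r0.apk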
Edit 9/5/21: Running the Debian-compatible Chrome stable on Alpine is going to be a very difficult task, to say the least. This is in part due to the very large number of dependencies and libraries. Trying to run it results in segfaults during dynamic linking and, finally, assertions from the dynamic linker. Even if we manage to get past these issues and start Chrome, it will probably be very unstable.
Since chromium-chromedriver is present in your package list, I suppose that you want to do browser automation. For browser automation, I used Java and Selenium, and downloaded the Chromium binary and the Chromium driver binary manually.
What I most want to tell you is that the Chromium binary and Chromium driver bundle might not work as expected; you may need to downgrade either the driver or the browser and make several trials to find a matched pair that really works, no matter whether you use Node.js or Java Selenium.
With Selenium, you have another option: deploy the Chrome and ChromeDriver bundle as an HTTP service on a different server, and make Selenium invoke the remote Chrome service.
ChromeDriver for version 93.0.4577.15
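A minimal sketch of that remote setup, assuming the stock selenium/standalone-chrome image and its default port 4444:
docker run -d -p 4444:4444 --shm-size=2g selenium/standalone-chrome
# point the Selenium client at http://<host>:4444/wd/hub instead of a local driver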

How do I install a specific version of Chrome in a Dockerfile?

I have a Dockerfile that builds from a Python image, and I need it to install a specific (not the latest) version of Google Chrome.
Here's what I have:
FROM python:3.6
# Tools
RUN apt-get update \
&& apt-get install -y vim less \
&& apt-get clean
# https://github.com/SeleniumHQ/docker-selenium/blob/master/NodeChrome/Dockerfile.txt
#============================================
# Google Chrome
#============================================
# can specify versions by CHROME_VERSION;
# e.g. google-chrome-stable=53.0.2785.101-1
# google-chrome-beta=53.0.2785.92-1
# google-chrome-unstable=54.0.2840.14-1
# latest (equivalent to google-chrome-stable)
# google-chrome-beta (pull latest beta)
#============================================
ARG CHROME_VERSION="google-chrome-stable"
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list \
&& apt-get update -qqy \
&& apt-get -qqy install \
${CHROME_VERSION:-google-chrome-stable} \
&& rm /etc/apt/sources.list.d/google-chrome.list \
&& rm -rf /var/lib/apt/lists/* /var/cache/apt/*
The Chrome installation steps were taken from here (as seen in the comments) and even using the version in the example I get the error
E: Version '53.0.2785.101-1' for 'google-chrome-stable' was not found
Tried other versions from https://chromereleases.googleblog.com/ and nothing works.
Do you know of a different way to install a specific version or if I'm doing something wrong with these steps?
This took me a while to find, but you're installing from Google's repository and they only keep the latest versions of Google Chrome in their repositories. You could probably search for 3rd party repositories that have older versions of Chrome, but I personally wouldn't recommend that.
The current version is 75.0.3770.100-1 for google-chrome-stable at the time of this post. Will that not work for you?
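If pinning to whatever version the Google repository currently publishes is enough, you could pass it through the build argument the Dockerfile already defines (a sketch; 75.0.3770.100-1 was the published version at the time of writing and will change):
docker build --build-arg CHROME_VERSION="google-chrome-stable=75.0.3770.100-1" .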
Lastly, I directly copied your Dockerfile and it worked for me, with the latest build of google-chrome-stable installed on the image. How were you running Docker?
Here was my process:
Copied your Dockerfile directly into ./Dockerfile
docker build ./
docker image ls
Copied my image id (90206843f24e in my case)
docker run --entrypoint "/bin/bash" -it 90206843f24e
You'll be dropped in a root shell on the docker image to "poke" around
run google-chrome -version to verify the above version is installed
I hope this works for you. Good luck and keep us posted!

"groupadd: Command not found" in docker container even though I install it and I am root

I have the below Dockerfile which I want to build. It's basically just the normal jboss/wildfly base image, but built with amazonlinux instead of centOS.
The build errors out with the line "groupadd: Command not found".
After this happened the first time, I added the "epel" repo and tried installing it manually, as you can see in the first RUN instruction. I have read a few forums and it seems like sometimes you get that error message when you're not running as root. I ran whoami and I am running as root, so that shouldn't be the issue.
Anyone got any idea why I'm still getting an error?
FROM amazonlinux:2017.03
# Install packages necessary to run EAP
RUN yum-config-manager --enable epel && yum update -y && yum -y install groupadd xmlstarlet saxon augeas bsdtar unzip && yum clean all
# Create a user and group used to launch processes
# The user ID 1000 is the default for the first "regular" user on Fedora/RHEL,
# so there is a high chance that this ID will be equal to the current user
# making it easier to use volumes (no permission issues)
RUN groupadd -r jboss -g 1000 && useradd -u 1000 -r -g jboss -m -d /opt/jboss -s /sbin/nologin -c "JBoss user" jboss && \
chmod 755 /opt/jboss
# Set the working directory to jboss' user home directory
WORKDIR /opt/jboss
# Specify the user which should be used to execute all commands below
USER jboss
Thanks in advance!
Your problem is that groupadd is not a package, so you can't install it like you are attempting to do at the moment.
You can install shadow-utils.x86_64, which will make the groupadd command available.
yum install shadow-utils.x86_64 -y
Or to modify your "RUN" line:
RUN yum-config-manager --enable epel && yum update -y && yum -y install shadow-utils.x86_64 xmlstarlet saxon augeas bsdtar unzip && yum clean all
That should fix your issue.
You also don't need the epel repository, so you can remove that bit altogether if you want.
In my case it was an issue with the Mac M1.
When I use compatibility mode, docker build works:
export DOCKER_DEFAULT_PLATFORM=linux/amd64
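Equivalently, the platform can be set for a single build instead of globally (the image tag is just an example):
docker build --platform linux/amd64 -t myimage .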
