Cypress Error: Failed to get 'appData' path - docker

I'm getting the error below when I run the Cypress tests in a Jenkins pipeline, but they work fine locally. Should I set the appData path in the Dockerfile? Is Electron missing write access?
+ cypress run --browser chrome
A JavaScript error occurred in the main process
Uncaught Exception:
Error: Failed to get 'appData' path
    at App.c._setDefaultAppPaths (electron/js2c/browser_init.js:5:1300)
    at Object.<anonymous> (electron/js2c/browser_init.js:185:2485)
    at Object../lib/browser/init.ts (electron/js2c/browser_init.js:185:3714)
    at __webpack_require__ (electron/js2c/browser_init.js:1:128)
Here's the Dockerfile.
ENV CI=1
ENV QT_X11_NO_MITSHM=1
ENV _X11_NO_MITSHM=1
ENV _MITSHM=0
# should be root user
RUN echo "whoami: $(whoami)"
RUN npm config -g set user $(whoami)
# command "id" should print:
# uid=0(root) gid=0(root) groups=0(root)
# which means the current user is root
RUN id
# point Cypress at the /root/cache no matter what user account is used
ENV CYPRESS_CACHE_FOLDER=/root/.cache/Cypress
RUN npm install -g "cypress@6.3.0"
RUN cypress verify
# Cypress cache and installed version
# should be in the root user's home folder
RUN cypress cache path
RUN cypress cache list
# give every user read access to the "/root" folder where the binary is cached
# we really only need to worry about the top folder, fortunately
RUN chmod 755 /root
CMD [ "cypress", "run"]
Here's the stage in the Jenkinsfile:
stage('e2e cypress testing') {
    steps {
        withDockerContainer(args: '-v $PWD:/e2e -w /e2e', image: '<image name>') {
            // some block
            sh 'cypress-run'
        }
    }
}
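One thing I'm considering (an untested sketch, not a confirmed fix): withDockerContainer runs the container as the Jenkins user rather than root, and on Linux Electron derives the 'appData' path from that user's HOME / XDG_CONFIG_HOME, which may not exist inside the image. Forcing a writable HOME in the container args would look roughly like this (the sh command here is the one from the failing log):
stage('e2e cypress testing') {
    steps {
        // Untested: give Electron a writable home so it can resolve 'appData'.
        // Running with '-u root' would be an alternative; both are assumptions, not verified fixes.
        withDockerContainer(args: '-v $PWD:/e2e -w /e2e -e HOME=/tmp', image: '<image name>') {
            sh 'cypress run --browser chrome'
        }
    }
}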
Versions:
npm version: 6.14.11
Cypress package version: 6.3.0
Cypress binary version: 6.3.0
Electron version: 11.2.0
Bundled Node version: 12.18.3

Related

Permission issue while building Docker image with Jenkins Pipeline

While building the image in Jenkins, the Gradle build fails with the error:
ERROR: JAVA_HOME is set to an invalid directory: /opt/java/openjdk
Please set the JAVA_HOME variable in your environment to match the
location of your Java installation.
The following is the relevant part of the Dockerfile. The RUN gradle build step is what fails.
FROM gradle:7.4.2-jdk8 as builder
COPY --chown=gradle:gradle . /home/gradle/src
WORKDIR /home/gradle/src
RUN gradle build --no-daemon
What I have checked
That the path /opt/java/openjdk is correct:
https://hub.docker.com/layers/gradle/library/gradle/jdk8-jammy/images/sha256-8fe6aa6c268162cbb00e0873e94e8c8a49aea1d3bdf7a3c7499751f227f5dfc6?context=explore
What fails is the following gradle check : https://github.com/marklogic-community/ml-gradle/blob/9816f8756e8a6c656cb2371a4d9f85405e39e6d8/gradlew#L73
if [ ! -x "$JAVACMD" ] ; then
die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME
Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi
It builds perfectly fine locally when I do a skaffold build with the local profile, but it fails in Jenkins.
So the path exists, and I am not sure why the -x check fails, and only in Jenkins. The file is executable by the user and group gradle:1000:1000 that comes with the image gradle:7.4.2-jdk8.
I would appreciate any insight to this issue. Thank you.
Same issue with a TeamCity agent running Docker-in-Docker.
Inside the container (Docker-in-Docker), test -x $JAVA_HOME/bin/java returns 1.
On the agent itself, test -x $JAVA_HOME/bin/java returns 0.
The file has 0777 mode and root owner as well.
Also seeing the same issue in Jenkins running on k8s. Using eclipse-temurin:11.0.15_10-jdk as the base image. Checking the Java executable before gradlew is called gives me:
13:34:46 Step 9/23 : RUN ls -la /opt/java/openjdk/bin/java
13:34:46 ---> Running in d7a82558e4b2
13:34:47 -rwxr-xr-x 1 root root 12768 Apr 19 21:38 /opt/java/openjdk/bin/java
but when I test for executable perms I get:
13:24:57 Step 10/22 : RUN test -x $JAVA_HOME/bin/java
13:24:57 ---> Running in 20dd8d832464
13:24:57 The command '/bin/sh -c test -x $JAVA_HOME/bin/java' returned a non-zero code: 1
It looks like commands are being run as root as well:
13:19:06 Step 10/21 : RUN id -u -n
13:19:06 ---> Running in 1ea36050bc88
13:19:06 root
What makes it weirder is that I'm able to manually create the same Jenkins pod used for builds, exec in and clone the repo and build the Docker image successfully with no issues.
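A possible lead, not verified for any of these setups: jammy-based images like the ones above ship a glibc that performs checks such as test -x via the faccessat2 syscall, and an older Docker/runc/libseccomp on the build host can reject that syscall with EPERM instead of falling back, so the check fails only inside containers on that host. A quick way to narrow it down is to compare runtimes on a working machine and on the failing agent, and to re-run the check with seccomp disabled:
# Compare engine/runtime versions between the machine that works and the CI agent that fails
docker version --format 'server: {{.Server.Version}}'
runc --version 2>/dev/null || true
# Probe: if the check passes only with seccomp disabled, the host's seccomp/libseccomp is the likely culprit
docker run --rm --security-opt seccomp=unconfined eclipse-temurin:11.0.15_10-jdk \
    sh -c 'test -x "$JAVA_HOME/bin/java" && echo "executable check passed"'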

The input device is not a TTY when I'm using docker? [duplicate]

This question already has answers here: Error "The input device is not a TTY" and docker error on windows: the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'.
I'm currently running the following command:
docker run -it -v $PWD:/e2e -w /e2e cypress/included:6.2.1
Error Message:
+ docker run -it -v %cd%:/e2e -w /e2e cypress/included:6.2.1
the input device is not a TTY
I'm pulling the Cypress container from the Cypress GitHub account.
My bitbucket-pipelines.yaml file:
image: atlassian/default-image:2

pipelines:
  default:
    - step:
        services:
          - docker
        script:
          - docker run -it -v %cd%:/e2e -w /e2e cypress/included:6.2.1
My dockerfile:
FROM cypress/browsers:node12.18.3-chrome87-ff82
ENV CI=1
ENV QT_X11_NO_MITSHM=1
ENV _X11_NO_MITSHM=1
ENV _MITSHM=0
# should be root user
RUN echo "whoami: $(whoami)"
RUN npm config -g set user $(whoami)
# command "id" should print:
# uid=0(root) gid=0(root) groups=0(root)
# which means the current user is root
RUN id
# point Cypress at the /root/cache no matter what user account is used
# see https://on.cypress.io/caching
ENV CYPRESS_CACHE_FOLDER=/root/.cache/Cypress
RUN npm install -g "cypress@6.2.1"
RUN cypress verify
# Cypress cache and installed version
# should be in the root user's home folder
RUN cypress cache path
RUN cypress cache list
RUN cypress info
RUN cypress version
# give every user read access to the "/root" folder where the binary is cached
# we really only need to worry about the top folder, fortunately
RUN ls -la /root
RUN chmod 755 /root
# always grab the latest NPM and Yarn
# otherwise the base image might have old versions
RUN npm i -g yarn#latest npm#latest
# should print Cypress version
# plus Electron and bundled Node versions
RUN cypress version
RUN echo " node version: $(node -v) \n" \
"npm version: $(npm -v) \n" \
"yarn version: $(yarn -v) \n" \
"debian version: $(cat /etc/debian_version) \n" \
"user: $(whoami) \n" \
"chrome: $(google-chrome --version || true) \n" \
"firefox: $(firefox --version || true) \n"
ENTRYPOINT ["cypress", "run"]
If I run this command:
docker run -it -v %cd%:/e2e -w /e2e cypress/included:6.2.1
It will tell me that it cannot find my JSON file, and I don't know why. Could someone help me a little bit?
What do I have to do, and why am I getting this issue? I guess I don't understand what TTY means.
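For what it's worth, a hedged reading of the error itself: the -t flag asks Docker to allocate a pseudo-TTY, and CI runners such as Bitbucket Pipelines don't attach one, so the flag fails; -i and -t are only useful for interactive sessions. A sketch of the same step without them (and with $PWD, since %cd% is Windows cmd syntax that won't expand in the pipeline's shell):
docker run -v "$PWD":/e2e -w /e2e cypress/included:6.2.1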

"logname: no login name" inside Docker container when running dpkg -i

I need to install an SDK package inside an Ubuntu 18.04 Docker container, but am constantly running into this problem:
theuser@e9fa4f39e0f0:/src/spinnaker$ sudo dpkg -i libspinnaker_2.2.0.48_arm64.deb
(Reading database ... 52013 files and directories currently installed.)
Preparing to unpack libspinnaker_2.2.0.48_arm64.deb ...
Unpacking libspinnaker (2.2.0.48) over (2.2.0.48) ...
logname: no login name
dpkg: warning: old libspinnaker package post-removal script subprocess returned error exit status 1
dpkg: trying script from the new package instead ...
logname: no login name
dpkg: error processing archive libspinnaker_2.2.0.48_arm64.deb (--install):
new libspinnaker package post-removal script subprocess returned error exit status 1
logname: no login name
dpkg: error while cleaning up:
new libspinnaker package post-removal script subprocess returned error exit status 1
Errors were encountered while processing:
libspinnaker_2.2.0.48_arm64.deb
I've tried all manner of workarounds: setting USER, SUDO_USER, and LOGNAME, and running the container with the "-u" switch set to my uid/gid, and all hit the same logname error. Is there a workaround for this?
I had the same problem with the latest Spinnaker API release.
The issue is that postinst calls logname to find out where your home directory is, in order to install some config files. In the Docker build context, there is no logged-in user.
My egregious hack was to overwrite the logname executable with "echo root".
e.g.:
# Install spinnaker sdk https://www.flir.com/support-center/iis/machine-vision/downloads/spinnaker-sdk-and-firmware-download/
COPY external/spinnaker/* spinnaker/
# Pre-answer the apt install prompts
COPY spinnaker.dat .
RUN cat spinnaker.dat >> /var/cache/debconf/config.dat
# Fake out logname (no login context in docker build)
RUN echo "echo root" > /usr/bin/logname
# Install other postinst dependencies
RUN DEBIAN_FRONTEND=noninteractive apt install -y iputils-ping wget
RUN DEBIAN_FRONTEND=noninteractive apt install -y --no-install-recommends ./spinnaker/lib*.deb && rm -rv spinnaker
The contents of spinnaker.dat (to avoid being prompted from the preinst script) are:
Name: libspinnaker/accepted-flir-eula
Template: libspinnaker/accepted-flir-eula
Value: true
Owners: libspinnaker
Flags: seen
Name: libspinnaker/error-flir-eula
Template: libspinnaker/error-flir-eula
Owners: libspinnaker
Name: libspinnaker/present-flir-eula
Template: libspinnaker/present-flir-eula
Value:
Owners: libspinnaker
Flags: seen
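A small follow-up to the hack above, as an untested sketch: since /usr/bin/logname gets clobbered, it may be worth keeping the real binary around and restoring it once the package is installed. Reusing the same paths as the snippet above:
# Untested: preserve the real logname, fake it for the postinst script, then restore it afterwards
RUN cp /usr/bin/logname /usr/bin/logname.real \
 && printf '#!/bin/sh\necho root\n' > /usr/bin/logname \
 && chmod +x /usr/bin/logname
RUN DEBIAN_FRONTEND=noninteractive apt install -y --no-install-recommends ./spinnaker/lib*.deb \
 && mv /usr/bin/logname.real /usr/bin/logname \
 && rm -rv spinnaker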

How to run an sh script in docker file?

When running an sh script from a Dockerfile, I got the following error:
./upload.sh: 5: ./upload.sh: sudo: not found
./upload.sh: 21: ./upload.sh: Bad substitution
sudo chmod 755 upload.sh # line 5
version=$(git rev-parse --short HEAD)
echo "version $version"
echo "Uploading file"
for path in $(find public/files -name "*.txt"); do
    echo "path $path"
    WORDTOREMOVE="public/"
    echo "WORDTOREMOVE $WORDTOREMOVE"
    # cause of the error
    newpath=${path//$WORDTOREMOVE/} # Line 21
    echo "new path $path"
    url=http://localhost:3000/${newpath}
    ...
    echo "Uploading file"
    ...
done
Dockerfile
FROM node:10-slim
EXPOSE 3000 4001
WORKDIR /prod/code
...
COPY . .
RUN ./upload.sh
RUN npm run build
CMD ./DockerRun.sh
Any idea?
If anyone faces the same issue, here is how I fixed it:
chmod +x upload.sh
git update-index --chmod=+x upload.sh (mandatory if you pushed the file to remote branch before changing its permission)
The Docker image you are using (node:10-slim) has no sudo installed, because this image runs processes as the root user:
docker run -it node:10-slim bash
root@68dcffceb88c:/# id
uid=0(root) gid=0(root) groups=0(root)
root@68dcffceb88c:/# which sudo
root@68dcffceb88c:/#
When your Dockerfile runs RUN ./upload.sh it will run:
sudo chmod 755 upload.sh
Using sudo inside the container fails because sudo is not installed; there is no need for sudo inside the container anyway, because all commands inside it already run as root.
Simply remove the sudo from line number 5.
If you wish to update the running PATH variable run:
PATH=$PATH:/directorytoadd/bin
This will append the directory "/directorytoadd/bin" to the current path.
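One thing the answers above don't cover is the second error, ./upload.sh: 21: Bad substitution. That comes from ${path//$WORDTOREMOVE/}, a bash-only pattern substitution, while the dash-style error format shows the script is being interpreted by plain sh on node:10-slim. Two hedged options, assuming the only goal of line 21 is to strip the leading "public/":
# Option 1: run the script explicitly with bash in the Dockerfile (bash is present on node:10-slim)
RUN bash ./upload.sh
# Option 2: stay with sh and use POSIX prefix removal instead of the bash-only ${path//.../}
WORDTOREMOVE="public/"
newpath="${path#"$WORDTOREMOVE"}"   # removes a leading "public/" only, which is what these paths need
echo "new path $newpath"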

wercker with docker switching user results in error, how to install nvm then?

Problem
My wercker build exits with Failed step: setup environment - Command exited with exit code: 1 when I'm switching user in my Docker image. I'm running wercker dev from the commandline. The Dockerfile builds fine with Docker itself on the commandline, as well as on Docker Hub. I can run it fine. It's just when I use it for wercker, that the error occurs.
For example in my Dockerfile is the following code:
# Adding user
RUN adduser --disabled-password --gecos '' dockworker && adduser dockworker sudo && echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
RUN mkdir -p /home/dockworker && chown -R dockworker:dockworker /home/dockworker
USER dockworker # Line the build seems to break on
When I comment this line out, it seems to pass. Now the problem with this, for me, is the following: I'd like to switch to another user, since I'm trying to install nvm (for gulp and bower). I generally prefer not to install this as root, so I add a user for it.
Workaround?
However, when I do install nvm as root in my Dockerfile (so just removing the user related lines in the codeblock above completely):
ENV NODE_VERSION 0.12.7
ENV NVM_DIR /usr/local/nvm
# NVM
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.25.4/install.sh | NVM_DIR=/usr/local/nvm bash
#install the specified node version and set it as the default one, install the global npm packages
RUN . /usr/local/nvm/nvm.sh && nvm install $NODE_VERSION && nvm alias default $NODE_VERSION && npm install -g bower && npm install -g gulp
Then it does get past the setup environment stage, but during the steps it errors out that nvm and npm are not found. The step in the wercker.yml:
box:
  id: francobolli/docker-ubuntu-14.04-php-5.6
  tag: latest
  env:
    NVM_DIR: /usr/local/nvm

dev:
  steps:
    - script:
        name: gulp styles and javascript
        code: |
          npm install
          bower install --allow-root
          gulp --env=production
I don't really understand this. When I run both docker images from the commandline (so with wercker removed from the context completely) I can execute nvm and npm just fine, but when I'm running it through wercker, it seems the .bashrc file is not being executed. When I cat ~/.bashrc during the steps, I can see:
export NVM_DIR="/usr/local/nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm
Workaround!
When I enter this in a step, it will be executed and I can npm install without a problem, so it seems this is never executed through the .bashrc:
...
- script:
    name: gulp styles and javascript
    code: |
      [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # It works when I put it here, but it's also in ~/.bashrc, which doesn't seem to get executed
      npm install
...
Note: If I source ~/.bashrc in the wercker step instead, it does not work.
Question
So my question is: what am I doing wrong that keeps me from switching user in the Wercker build? And even if I could switch, would I run into the same problem as when running nvm as root? nvm and npm CAN be found when the Docker container is started from the command line, but CAN'T be found when it runs under Wercker. What's the best solution?
I'd rather not add commands in the wercker.yml if it can be resolved through proper user configuration or proper nvm configuration. Sorry if I'm missing something very obvious.
This has nothing to do with Docker configuration, but with how Wercker handles Docker boxes. From the documentation:
Using Sudo
The sudo command is no longer supported in wercker v2 and effectively does nothing when used.
And for deployment:
Please note that if you update a project to make use of Docker (Ewok version) and this project has autodeployment, this deploy will most likely fail. We will update our documentation in the future on how to deploy these containers.
However, I did get it to build (and deploy) with the solution (temporary workaround?) as displayed in the original question.
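For completeness, a possible alternative to sourcing nvm.sh inside every step (untested, and it assumes Wercker runs step code under non-interactive bash and passes the box env vars through): bash sources the file named by BASH_ENV when it starts non-interactively, while ~/.bashrc is only read by interactive shells, which would explain why the .bashrc lines never run in CI. Pointing BASH_ENV at nvm.sh in the box env might make nvm/npm available without editing each step:
box:
  id: francobolli/docker-ubuntu-14.04-php-5.6
  tag: latest
  env:
    NVM_DIR: /usr/local/nvm
    # Assumption: steps run under non-interactive bash, which sources $BASH_ENV on startup
    BASH_ENV: /usr/local/nvm/nvm.sh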
