How to properly run Docker containers and a Yarn server with WSL 2 in Windows 10?

The question may be poorly formed, but let me explain.
Initial setup
I had my web-dev Laravel project in C:\dev\gitlab.our-company.com\laravel-backend with a Vue app in .\public\vue-frontend.
I use Docker (using WSL 2 based engine) with the following commands:
# First time
export AUTH_TOKEN='glpat-XXX'
# Double quotes so that ${AUTH_TOKEN} actually expands inside the JSON value
export COMPOSER_AUTH="{\"http-basic\":{\"gitlab.our-company.com\": {\"username\": \"oauth2\", \"password\": \"${AUTH_TOKEN}\"}}}"
docker build -f deployment/app.Dockerfile --build-arg COMPOSER_AUTH -t laravel_backend_app .
docker build -f deployment/web.Dockerfile --build-arg AUTH_TOKEN -t laravel_backend_web .
# Always
docker-compose -f deployment/docker-compose.yml -f deployment/docker-compose.override.yml -p laravel_backend up
Then I open the container's bash with
docker-compose -f deployment/docker-compose.yml -f deployment/docker-compose.override.yml -p laravel_backend exec app bash
which in turn allows me to run php artisan test.
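As an aside, these long invocations can be shortened with docker-compose's standard COMPOSE_FILE and COMPOSE_PROJECT_NAME environment variables; the following is just a sketch restating the commands above (on Linux/macOS the file-list separator is a colon):
export COMPOSE_FILE=deployment/docker-compose.yml:deployment/docker-compose.override.yml
export COMPOSE_PROJECT_NAME=laravel_backend
docker-compose up                 # same as the long form above
docker-compose exec app bash      # then run: php artisan test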
I can also run
yarn --cwd ./public/vue-frontend/ install && yarn --cwd ./public/vue-frontend/ serve
to serve the frontend.
Why I made changes
Running php artisan test or vendor/bin/grumphp run in the container's bash was ridiculously slow (roughly 10x slower).
Current setup
$ wsl docker --version
Docker version 20.10.12, build e91ed57
Using Ubuntu for Windows and explorer.exe . (to browse the distro's files from Windows), I copied the entire C:\dev to ~/dev in Ubuntu.
I rebuilt the Docker containers and brought them up, opened the app container's bash, and tried php artisan test. It is now lightning fast.
Question 1
I can perform the procedure from the last paragraph in two ways:
I can open the VS Code integrated terminal in //wsl$/Ubuntu/home/{USER}/dev/gitlab.our-company.com/laravel-backend and run the procedure directly there, or
as an extra step, once in the terminal, I can execute the wsl command so that my location becomes ~/dev/gitlab.our-company.com/laravel-backend, and then continue with said procedure.
In both cases, I get my php artisan test speed improvement.
Which is correct, or for some reason better?
Problem
I had issues with yarn, so in Ubuntu for Windows I installed nvm and then with it nodejs, npm, yarn.
Yarn is installed in both cases, though with a slightly different version.
//wsl$/Ubuntu/home/{USER}/dev/gitlab.our-company.com/laravel-backend (205-issue-title)
$ yarn --version
1.22.17
$ wsl
{USER}@DESKTOP-NAME:~/dev/gitlab.our-company.com/laravel-backend$ yarn --version
1.22.15
Now the problem is serving the app.
//wsl$/Ubuntu/home/{USER}/dev/gitlab.our-company.com/laravel-backend (205-issue-title)
$ yarn --cwd ./public/vue-frontend/ install && yarn --cwd ./public/vue-frontend/ serve
yarn install v1.22.17
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
[-/3] ⡀ waiting...
[2/3] ⡀ ejs
error \\wsl$\Ubuntu\home\{USER}\dev\gitlab.our-company.com/laravel-backend\public\vue-frontend\node_modules\yorkie: Command failed.
Exit code: 1
Command: node bin/install.js
Arguments:
Directory: \\wsl$\Ubuntu\home\{USER}\dev\gitlab.our-company.com/laravel-backend\public\vue-frontend\node_modules\yorkie
Output:
'\\wsl$\Ubuntu\home\{USER}\dev\gitlab.our-company.com/laravel-backend\public\vue-frontend\node_modules\yorkie'
CMD.EXE was started with the above path as the current directory.
UNC paths are not supported. Defaulting to Windows directory.
node:internal/modules/cjs/loader:936
throw err;
^
Error: Cannot find module 'C:\Windows\bin\install.js'
at Function.Module._resolveFilename (node:internal/modules/cjs/loader:933:15)
at Function.Module._load (node:internal/modules/cjs/loader:778:27)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
at node:internal/main/run_main_module:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
or with wsl:
{USER}@DESKTOP-3OMP0G1:~/dev/gitlab.our-company.com/laravel-backend$ yarn --cwd ./public/vue-frontend/ install && yarn --cwd ./public/vue-frontend/ serve
yarn install v1.22.15
[1/4] Resolving packages...
[2/4] Fetching packages...
error An unexpected error occurred: "https://gitlab.our-company.com/api/v4/projects/{PROJECT_ID}/packages/npm/@our-company/case-messaging/-/@our-company/case-messaging-1.0.3.tgz: Request failed \"404 Not Found\"".
info If you think this is a bug, please open a bug report with the information provided in "/home/{USER}/dev/gitlab.our-company.com/laravel-backend/public/vue-frontend/yarn-error.log".
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.

I now believe the second way, i.e. running everything inside wsl, is the right one.
If anyone has a deeper, better or different explanation I will review it, and likely mark it as the accepted answer.
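For what it's worth, the UNC error above is consistent with the Windows-side yarn and node being picked up when the terminal sits on a \\wsl$ path: yorkie's install script spawns CMD.EXE, which cannot use a UNC path as its working directory, so it falls back to C:\Windows, and node bin/install.js then resolves to the non-existent C:\Windows\bin\install.js. A quick way to confirm which binary each shell resolves (illustrative commands, not from the original post; assumes the first terminal is a POSIX-style Windows shell such as Git Bash):
# in the VS Code terminal opened on the //wsl$ path:
which yarn    # a path under /c/... means the Windows yarn (and Windows node/CMD.EXE) is used
# after executing wsl:
which yarn    # should resolve to a Linux path, e.g. somewhere under ~/.nvm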
The problem I had with running yarn install in wsl is somewhat unrelated to the question: it was caused by an npm package that resides in Our Company's self-hosted GitLab (as opposed to the public npm registry), which npm had no credentials to download the case-messaging package from.
Solved it with
npm config set @our-company:registry https://gitlab.our-company.com/api/v4/projects/{PROJECT_ID}/packages/npm/
npm config set -- '//gitlab.our-company.com/api/v4/projects/{PROJECT_ID}/packages/npm/:_authToken' "${AUTH_TOKEN}"
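For reference, and as far as I understand npm's config handling, those two commands amount to the following lines in ~/.npmrc (with {PROJECT_ID} and the token value as placeholders from above); Yarn 1 reads .npmrc as well:
@our-company:registry=https://gitlab.our-company.com/api/v4/projects/{PROJECT_ID}/packages/npm/
//gitlab.our-company.com/api/v4/projects/{PROJECT_ID}/packages/npm/:_authToken=glpat-XXX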

Related

WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.9/main: No such file or directory

I configured Deis Workflow in an AWS EKS cluster, then created Deis apps and deployed them to the Deis local repository with
git push test test:master
During deployment my Dockerfile is executed. Here it is:
FROM mhart/alpine-node:12
#FROM ubuntu:18.04
ARG SOURCE_VERSION=na
ENV SOURCE_VERSION=$SOURCE_VERSION
RUN apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/v3.9 --update bash && rm -rf /var/cache/apk/*
#apt-get update &&\
#apt-get install -y make gcc wget
WORKDIR /app
ADD . .
RUN npm install
EXPOSE 3200
CMD ["node", "app.js"]
This results in an error like:
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.9/main: temporary error (try again later)
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.9/main: No such file or directory
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.9/community: temporary error (try again later)
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.9/community: No such file or directory
ERROR: unable to select packages:
bash (no such package):
required by: world[bash]
The command '/bin/sh -c apk add --update bash && rm -rf /var/cache/apk/*' returned a non-zero code: 1
remote: 2021-11-15 13:30:22.569253 I | Error running git receive hook [Build pod exited with code 1, stopping build]
To ssh://deis-builder.app-test.paceup.io:2222/pu-api-gateway.git
! [remote rejected] test -> master (pre-receive hook declined)
error: failed to push some refs to 'ssh://git@deis-builder.app-test.paceup.io:2222/pu-api-gateway.git'
I am totally new to Docker, Deis, and EKS. If anyone can help, I would be grateful.
Finally found the answer: we had configured the node group on Amazon Linux, which didn't support this deployment. We changed the node group to EKS-optimized Ubuntu, deployed the app using Docker, and it is working fine.
Edit:
This works on some Linux versions. In my case it works on EKS version 1.9 but not on EKS version 2.0 and above.
This error may also be caused by a DNS issue. When building the Docker image, configure Docker's DNS to point at Google's DNS server 8.8.8.8, or edit resolv.conf and add nameserver 8.8.8.8 inside the container.
I hope this helps.
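Note that docker build itself has no DNS flag on most versions; the DNS used during builds comes from the Docker daemon. A minimal sketch, assuming a Linux host with systemd: put
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}
into /etc/docker/daemon.json and restart the daemon with sudo systemctl restart docker.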
I had this problem when my machine showed several symptoms of a network configuration problem:
A Dockerfile that had to download zip files from the net could no longer do so and threw the warning in question, which stopped the build. I could still download the zip files by entering the URLs in a browser instead, so it was a problem of the container. I checked the same Dockerfile on another, healthy machine and the build ran through.
I had lost the connection to the internal DNS server. I could no longer ping another machine by its name, but had to use its internal IP, although the ping had worked the day before.
I could see GCP project items only in Firefox incognito mode.
The answer insofar is: change the machine and test whether it fails only on your machine. If that is true, the workaround is already done. As the next step, try to fix any other network problems; it is likely that this will get rid of the warning.
UPDATE: The problem was a running container that gave my machine its own network. When I ran docker-compose down, the network worked again. When I removed that network from the docker-compose file, the download from inside the container worked again and the warning in question was gone.
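If you suspect the same cause, a generic way to check (standard Docker commands, not part of the original answer):
docker network ls        # look for networks created by a running compose project
docker-compose down      # stop the project and remove its containers and networks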

"logname: no login name" inside Docker container when running dpkg -i

I need to install an SDK package inside an Ubuntu 18.04 Docker container, but am constantly running into this problem:
theuser#e9fa4f39e0f0:/src/spinnaker$ sudo dpkg -i libspinnaker_2.2.0.48_arm64.deb
(Reading database ... 52013 files and directories currently installed.)
Preparing to unpack libspinnaker_2.2.0.48_arm64.deb ...
Unpacking libspinnaker (2.2.0.48) over (2.2.0.48) ...
logname: no login name
dpkg: warning: old libspinnaker package post-removal script subprocess returned error exit status 1
dpkg: trying script from the new package instead ...
logname: no login name
dpkg: error processing archive libspinnaker_2.2.0.48_arm64.deb (--install):
new libspinnaker package post-removal script subprocess returned error exit status 1
logname: no login name
dpkg: error while cleaning up:
new libspinnaker package post-removal script subprocess returned error exit status 1
Errors were encountered while processing:
libspinnaker_2.2.0.48_arm64.deb
I've tried all manner of workarounds: setting USER, SUDO_USER, and LOGNAME, and running the container with the "-u" switch set to my uid/gid, and they all hit the same logname error. Is there a workaround for this?
I had the same problem with the latest Spinnaker API release.
The issue is that postinst calls logname to find out where your home directory is, in order to install some config files. In the docker build context, there is no logged-in user.
My egregious hack was to overwrite the logname executable with "echo root".
e.g.:
# Install spinnaker sdk https://www.flir.com/support-center/iis/machine-vision/downloads/spinnaker-sdk-and-firmware-download/
COPY external/spinnaker/* spinnaker/
# Pre-answer the apt install prompts
COPY spinnaker.dat .
RUN cat spinnaker.dat >> /var/cache/debconf/config.dat
# Fake out logname (no login context in docker build)
RUN echo "echo root" > /usr/bin/logname
# Install other postinst dependencies
RUN DEBIAN_FRONTEND=noninteractive apt install -y iputils-ping wget
RUN DEBIAN_FRONTEND=noninteractive apt install -y --no-install-recommends ./spinnaker/lib*.deb && rm -rv spinnaker
The contents of spinnaker.dat (to avoid being prompted by the preinst script) are:
Name: libspinnaker/accepted-flir-eula
Template: libspinnaker/accepted-flir-eula
Value: true
Owners: libspinnaker
Flags: seen

Name: libspinnaker/error-flir-eula
Template: libspinnaker/error-flir-eula
Owners: libspinnaker

Name: libspinnaker/present-flir-eula
Template: libspinnaker/present-flir-eula
Value:
Owners: libspinnaker
Flags: seen
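If later layers need the real logname again, it should be possible to restore it, since logname is part of coreutils on Ubuntu; a sketch (untested with this SDK):
RUN apt-get update && apt-get install -y --reinstall coreutils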

How to install SSHFS inside Alpine container?

I use php:7.3.2-cli-alpine3.9 as my base image. I also need SSHFS installed inside the container, since a PHP library I use relies on it. I know there are many answers saying "if you install SSHFS inside a container you are doing it wrong", but in my case I need this software installed inside the container, not on the host.
In my Dockerfile I have
RUN apk update && apk add sshfs;
This command is executed without errors:
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
v3.9.3-21-g265a28802e [http://dl-cdn.alpinelinux.org/alpine/v3.9/main]
v3.9.3-15-g583c0d55e9 [http://dl-cdn.alpinelinux.org/alpine/v3.9/community]
OK: 9764 distinct packages available
(1/12) Installing openssh-keygen (7.9_p1-r4)
(2/12) Installing openssh-client (7.9_p1-r4)
(3/12) Installing fuse-common (3.2.6-r1)
(4/12) Installing fuse3 (3.2.6-r1)
(5/12) Installing libffi (3.2.1-r6)
(6/12) Installing libintl (0.19.8.1-r4)
(7/12) Installing libuuid (2.33-r0)
(8/12) Installing libblkid (2.33-r0)
(9/12) Installing libmount (2.33-r0)
(10/12) Installing pcre (8.42-r1)
(11/12) Installing glib (2.58.1-r2)
(12/12) Installing sshfs (3.5.1-r0)
Executing busybox-1.29.3-r10.trigger
Executing glib-2.58.1-r2.trigger
OK: 25 MiB in 44 packages
But when I try to mount the remote host:
'/usr/bin/sshfs' -C -o reconnect -o 'Port=22' -o 'UserKnownHostsFile=/ssh/known_hosts' -o StrictHostKeyChecking=yes -o 'IdentityFile=/ssh/<FILE>' -o PasswordAuthentication=no '<USER>@<HOST>:/' '/fuse/'
I am getting:
fuse: device not found, try 'modprobe fuse' first
When I then run modprobe fuse, I get:
modprobe: can't change directory to '/lib/modules': No such file or directory
Am I missing any packages? What else needs to be installed?
Thanks.
Running SSHFS inside a container requires privileged permissions, because mounting a FUSE filesystem needs access to the host's fuse device.
Install SSHFS by adding this line to your Dockerfile:
RUN apk update && apk add sshfs;
Run container:
docker run --privileged=true -it --rm --name alpine-app transfers-image
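If full --privileged is more than you want to grant, FUSE mounts typically also work with just the SYS_ADMIN capability plus the fuse device passed through; a narrower sketch (on some hosts AppArmor must be relaxed as well):
docker run --cap-add SYS_ADMIN --device /dev/fuse -it --rm --name alpine-app transfers-image
# if AppArmor still blocks the mount, add: --security-opt apparmor:unconfined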

Installing packages into ubuntu14.04 docker container

I currently have similar images being built for VirtualBox and DigitalOcean for dev and production (they're built with Packer and Ansible). They're using Ubuntu 14.04.
I've created a Docker version from the same scripts without any issue. This is going to be for a GitLab CI environment.
When I come to install packages inside a container I get an error. Potentially to do with a broken init system? Something not running?
My initial command is /sbin/init and I've tried with and without phusion/base-image.
The error is msg: '/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install 'docker-engine'' failed: invoke-rc.d: unknown initscript, /etc/init.d/cgroup-lite not found.
dpkg: error processing package cgroup-lite (--configure):
(Yes, this is going to be a monolithic container rather than single-process and yes, I'm running docker from inside it - I'll be sharing docker.sock to make this work.)
So, I had a look at the code for invoke-rc.d and found this relevant snippet.
# If we're running on upstart and there's an upstart job of this name, do
# the rest with upstart instead of calling the init script.
if which initctl >/dev/null && initctl version | grep -q upstart \
   && [ -e "$UPSTARTDIR/${INITSCRIPTID}.conf" ]
then
    is_upstart=1
elif test ! -f "${INITDPREFIX}${INITSCRIPTID}" ; then
    ## Verifies if the given initscript ID is known
    ## For sysvinit, this error is critical
    printerror unknown initscript, ${INITDPREFIX}${INITSCRIPTID} not found.
    if [ ! -e "$UPSTARTDIR/${INITSCRIPTID}.conf" ]; then
        # If the init script doesn't exist, but the upstart job does, we
        # defer the error exit; we might be running in a chroot and
        # policy-rc.d might say not to start the job anyway, in which case
        # we don't want to exit non-zero.
        exit 100
    fi
fi
A combination of Docker replacing the init system, the inability to use upstart in an Ubuntu Docker container, and the Ubuntu package for cgroup-lite being built for upstart meant dpkg --configure was failing because the service couldn't be started.
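The snippet also points to the usual escape hatch: invoke-rc.d consults /usr/sbin/policy-rc.d, and exit code 101 means "do not start services". A common Dockerfile workaround (not part of the original answer, but widely used for exactly this failure) is:
# tell invoke-rc.d never to start services inside the container
RUN printf '#!/bin/sh\nexit 101\n' > /usr/sbin/policy-rc.d \
    && chmod +x /usr/sbin/policy-rc.d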

wercker with docker switching user results in error, how to install nvm then?

Problem
My wercker build exits with Failed step: setup environment - Command exited with exit code: 1 when I switch user in my Docker image. I'm running wercker dev from the command line. The Dockerfile builds fine with Docker itself on the command line, as well as on Docker Hub, and I can run it fine. It's only when I use it for wercker that the error occurs.
For example in my Dockerfile is the following code:
# Adding user
RUN adduser --disabled-password --gecos '' dockworker && adduser dockworker sudo && echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
RUN mkdir -p /home/dockworker && chown -R dockworker:dockworker /home/dockworker
USER dockworker # Line the build seems to break on
When I comment this line out, the build seems to pass. Now the problem with this, for me, is the following: I'd like to switch to another user, since I'm trying to install nvm (for gulp and bower). I generally prefer not to install this as root, so I add a user for it.
Workaround?
However, when I do install nvm as root in my Dockerfile (so just removing the user-related lines in the code block above completely):
ENV NODE_VERSION 0.12.7
ENV NVM_DIR /usr/local/nvm
# NVM
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.25.4/install.sh | NVM_DIR=/usr/local/nvm bash
#install the specified node version and set it as the default one, install the global npm packages
RUN . /usr/local/nvm/nvm.sh && nvm install $NODE_VERSION && nvm alias default $NODE_VERSION && npm install -g bower && npm install -g gulp
Then it does get past the setup environment stage, but during the steps it errors out saying nvm and npm cannot be found. The step in the wercker.yml:
box:
  id: francobolli/docker-ubuntu-14.04-php-5.6
  tag: latest
  env:
    NVM_DIR: /usr/local/nvm
dev:
  steps:
    - script:
        name: gulp styles and javascript
        code: |
          npm install
          bower install --allow-root
          gulp --env=production
I don't really understand this. When I run both Docker images from the command line (so with wercker removed from the context completely), I can execute nvm and npm just fine; but when I run through wercker, it seems the .bashrc file is never executed. When I cat ~/.bashrc during the steps, I can see:
export NVM_DIR="/usr/local/nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm
Workaround!
When I enter this in a step, it is executed and I can run npm install without a problem, so it seems this line is never executed via .bashrc:
...
- script:
    name: gulp styles and javascript
    code: |
      [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # It works when I put it here, but it's also in ~/.bashrc, which doesn't seem to get executed
      npm install
...
Note: If I source ~/.bashrc in the wercker step instead, it does not work.
Question
So my question is: what am I doing wrong that I cannot switch user in the wercker build? And even if I could, would I have the same problem as running nvm as root? nvm and npm CAN be found when a Docker container is started from the command line, but CAN'T be found when running it under wercker. What's the best solution?
I'd rather not add commands in the wercker.yml if it can be resolved through proper user configuration or proper nvm configuration. Sorry if I'm missing something very obvious.
This has nothing to do with Docker configuration, but with how Wercker handles Docker boxes. From the documentation:
Using Sudo
The sudo command is no longer supported in wercker v2 and effectively does nothing when used.
And for deployment:
Please note that if you update a project to make use of Docker (Ewok version) and this project has autodeployment, this deploy will most likely fail. We will update our documentation in the future on how to deploy these containers.
However, I did get it to build (and deploy) with the solution (temporary workaround?) as displayed in the original question.
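The underlying mechanics, for anyone hitting the same thing: wercker runs its steps in a non-interactive shell, and Ubuntu's stock ~/.bashrc returns early for non-interactive shells, so nvm.sh is never sourced no matter where the export lines live (which also explains why sourcing ~/.bashrc in a step does nothing). A Dockerfile-side sketch that avoids sourcing altogether, assuming the old nvm layout where node 0.12.x lands directly under $NVM_DIR/v$NODE_VERSION:
# put the nvm-installed node on the PATH so every shell finds node/npm without nvm.sh
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH      $NVM_DIR/v$NODE_VERSION/bin:$PATH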
