Docker EROFS: read-only file system, unlink webpack-dev-server

Some of my Docker containers fail to come up.
When I run sudo docker-compose up I get this error:
client_1 | EROFS: read-only file system, unlink '/usr/src/client/node_modules/webpack-dev-server/ssl/server.pem'
client_1 | error Command failed with exit code 1.
I tried to insert "dependenciesMeta": {"webpack-dev-server": {"unplugged": true}} into my package.json, but nothing changed.
I have also tried adding "engines": {"yarn": "x.x"} to my package.json.
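EROFS means the process tried to write to a path that sits on a read-only file system. A first thing to check (a sketch, assuming a compose setup along the lines of the client_1 service above) is whether the service or one of its mounts is declared read-only:
services:
  client:
    # either of these would make node_modules unwritable:
    read_only: true                    # whole container filesystem read-only
    volumes:
      - ./client:/usr/src/client:ro    # :ro suffix mounts this path read-only
Dropping read_only: true or the :ro suffix (or mounting node_modules as a separate writable volume) would let webpack-dev-server regenerate its ssl/server.pem.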

Related

How to properly create a tar archive to import with docker

I need to extract the filesystem of a debian image onto the host, modify it, then repackage it back into a docker image. I'm using the following commands:
docker export container_name > archive.tar
tar -xf archive.tar -C debian/
# ... modify the file system here ...
tar -cpjf archive-modified.tar debian/
docker import archive-modified.tar debian-modified
docker run -it debian-modified /bin/bash
After I try to run the new docker image I get the following error:
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/bin/bash": stat /bin/bash: no such file or directory: unknown.
ERRO[0000] error waiting for container: context canceled
I've tried the above steps without modifying the file system at all and I get the same behavior. I've also tried importing the output of docker export directly, and this works fine. This probably means I'm creating the new tar archive incorrectly. Can anyone tell me what I'm doing wrong?
Take a look at the archive generated by docker export:
# tar tf archive.tar | sort | head
bin/
bin/bash
bin/cat
bin/chgrp
bin/chmod
bin/chown
bin/cp
bin/dash
bin/date
bin/dd
And then at the archive you generate with your tar -cpjf ... command:
# tar tf archive-modified.tar | sort | head
debian/
debian/bin/
debian/bin/bash
debian/bin/cat
debian/bin/chgrp
debian/bin/chmod
debian/bin/chown
debian/bin/cp
debian/bin/dash
debian/bin/date
You've moved everything into a debian/ top-level directory, so there is no /bin/bash in the image (it would be /debian/bin/bash, and it probably wouldn't work anyway because your shared libraries aren't in the expected location either).
You probably want to create the updated archive like this:
# tar -cpjf archive-modified.tar -C debian/ .
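Putting it all together, the round trip would look something like this (a sketch, assuming the container is still named container_name):
docker export container_name > archive.tar
mkdir -p debian && tar -xf archive.tar -C debian/
# ... modify the file system under debian/ here ...
tar -cpjf archive-modified.tar -C debian/ .
docker import archive-modified.tar debian-modified
docker run -it debian-modified /bin/bash
The only change from the original sequence is -C debian/ . on the tar command, which archives the directory's contents instead of the directory itself.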

Issue regarding the deployment of a Meteor Application on Google Cloud App Engine: APP_CONTAINER_CRASHED

TL;DR
Clone this: https://github.com/calvan-liang/radgrad2googlecloudissue.
Ensure you have meteor-google-cloud and the gcloud CLI installed. If not:
In PowerShell:
npm install meteor-google-cloud -g
On Ubuntu Terminal:
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg]
https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a
/etc/apt/sources.list.d/google-cloud-sdk.list
sudo apt-get install apt-transport-https ca-certificates gnupg
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --
keyring /usr/share/keyrings/cloud.google.gpg add -
sudo apt-get update && sudo apt-get install google-cloud-sdk
meteor-google-cloud --init
If that is not successful, or you are not on Ubuntu: https://cloud.google.com/sdk/install
To deploy, in the app directory:
meteor-google-cloud --settings deploy/settings.json --app deploy/app.yml --docker deploy/Dockerfile
What is the cause of the APP_CONTAINER_CRASHED and how can it be resolved?
I am currently following the README.md from https://github.com/EducationLink/meteor-google-cloud to deploy a pre-existing project using Google Cloud. On the fourth step of the deploy, while the default service is updating, I receive this error:
ERROR: (gcloud.app.deploy) Error Response: [9]
Application startup error! Code: APP_CONTAINER_CRASHED
/app/programs/server/node_modules/fibers/future.js:313
throw(ex);
^
MongoNetworkError: failed to connect to server [bla.com:27017] on first connect [MongoNetworkError: connection timed out
at connectionFailureError (/app/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/core/connection/connect.js:406:14)
at Socket.<anonymous> (/app/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/core/connection/connect.js:294:16)
at Object.onceWrapper (events.js:417:28)
at Socket.emit (events.js:311:20)
at Socket.EventEmitter.emit (domain.js:482:12)
at Socket._onTimeout (net.js:478:8)
at listOnTimeout (internal/timers.js:549:17)
at processTimers (internal/timers.js:492:7) {
name: 'MongoNetworkError',
[Symbol(mongoErrorContextSymbol)]: {}
}]
at Pool.<anonymous> (/app/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/core/topologies/server.js:438:11)
at Pool.emit (events.js:311:20)
at Pool.EventEmitter.emit (domain.js:482:12)
at /app/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/core/connection/pool.js:561:14
at /app/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/core/connection/pool.js:994:11
at /app/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/core/connection/connect.js:31:7
at callback (/app/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/core/connection/connect.js:264:5)
at Socket.<anonymous> (/app/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/core/connection/connect.js:294:7)
at Object.onceWrapper (events.js:417:28)
at Socket.emit (events.js:311:20)
at Socket.EventEmitter.emit (domain.js:482:12)
at Socket._onTimeout (net.js:478:8)
at listOnTimeout (internal/timers.js:549:17)
at processTimers (internal/timers.js:492:7) {
name: 'MongoNetworkError',
[Symbol(mongoErrorContextSymbol)]: {}
}
The pre-existing project in use is cloned from https://github.com/radgrad/radgrad2. I added a deploy directory in radgrad2/app. Inside the deploy directory are these files:
Dockerfile
FROM gcr.io/google_appengine/nodejs
RUN install_node {{ nodeVersion }}
RUN npm install npm@{{ npmVersion }}
RUN node -v
RUN npm -v
COPY . /app/
RUN (cd programs/server && npm install --unsafe-perm)
CMD node main.js
app.yml
runtime: custom
service: default
env: flex
threadsafe: true
zones:
  - us-west3-b
  - us-west3-c
resources:
  cpu: 1
  memory_gb: 1
  disk_size_gb: 20
network:
  session_affinity: true
automatic_scaling:
  max_num_instances: 2
skip_files:
  - ^(.*/)?\.dockerignore$
  - ^(.*/)?\yarn-error.log$
  - ^(.*/)?\.git$
  - ^(.*/)?\.hg$
  - ^(.*/)?\.svn$
settings.json
{
  "public": {},
  "private": {},
  "meteor-google-cloud": {
    "project": "radgrad2test",
    "stop-previous-version": "",
    "env_variables": {
      "MONGO_URL": "mongodb://user:pw@bla.com",
      "ROOT_URL": "https://example.de"
    }
  }
}
Note that I am running this on Windows 10 Home using WSL 2 with Docker Desktop.
What might be causing the app container to crash? How could I resolve this issue, or where should I look to find the origin of the problem?
MongoNetworkError: failed to connect to server
This is a bug that was only recently fixed in the newest Meteor version:
With some MongoDB host providers, such as ScaleGrid and IBM Cloud, developers were getting this error because of certificate issues:
MongoNetworkError: failed to connect to server [sg-meteorappdb-32194.servers.mongodirector.com:27017] on first connect [Error: self signed certificate
To fix this, there is now an option to configure Mongo options through your Meteor settings.json file. It has been added to the documentation:
"packages": {
"mongo": {
"options": {
"tls": true,
"tlsCAFileAsset": "certificate.pem"
}
}
}
Thanks for the suggestion. I added the above to the settings.json file, but unfortunately it did not recognize "certificate.pem":
ERROR: (gcloud.app.deploy) Error Response: [9]
Application startup error! Code: APP_CONTAINER_CRASHED
/app/programs/server/node_modules/fibers/future.js:280
throw(ex);
^
Error: ENOENT: no such file or directory, open '/app/programs/server/assets/app/certificate.pem'
at Object.openSync (fs.js:457:3)
at Object.readFileSync (fs.js:359:35)
at /app/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/operations/connect.js:243:32
at Array.forEach (<anonymous>)
at resolveTLSOptions (/app/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/operations/connect.js:241:34)
at /app/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/operations/connect.js:294:5
at parseConnectionString (/app/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/core/uri_parser.js:685:3)
at connect (/app/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/operations/connect.js:272:3)
at /app/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/mongo_client.js:215:5
at maybePromise (/app/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/utils.js:719:3)
at MongoClient.connect (/app/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/mongo_client.js:211:10)
at Function.MongoClient.connect (/app/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/mongo_client.js:421:22)
at new MongoConnection (packages/mongo/mongo_driver.js:206:11)
at new MongoInternals.RemoteCollectionDriver (packages/mongo/remote_collection_driver.js:4:16)
at Object.<anonymous> (packages/mongo/remote_collection_driver.js:38:10)
at Object.defaultRemoteCollectionDriver (packages/underscore.js:784:19) {
errno: -2,
syscall: 'open',
code: 'ENOENT',
path: '/app/programs/server/assets/app/certificate.pem'
}
What could I do to resolve this issue? I made sure that I installed the latest version of Meteor.
Allowing invalid certificates does not work either; I run into the same MongoNetworkError: failed to connect to server [bla.com:27017] on first connect [MongoNetworkError: connection timed out] error.
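A hedged note on the ENOENT: Meteor bundles server assets from the app's private/ directory, which is what tlsCAFileAsset resolves against (hence the /app/programs/server/assets/app/ path in the stack trace). So the certificate most likely has to exist at private/certificate.pem before the app is built and deployed:
# a sketch, assuming the CA certificate from your Mongo host has been
# downloaded locally; files under private/ are bundled as server assets
# and end up under programs/server/assets/app/ in the build
mkdir -p private
cp /path/to/ca-from-your-mongo-host.pem private/certificate.pem
Note that the original error was a connection timeout rather than a certificate rejection, so the certificate may not be the whole story.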

Error processing tar file (exit status 1): unexpected EOF when building with docker-compose while data directory exists

My docker-compose.yml looks like this:
version: '3'
services:
  phab:
    build:
      context: .
      args:
        - PHAB_BASE_URI=https://phab.example.com
        - PHAB_REPO_PATH=/var/repo
        - PHAB_TIMEZONE=Europe/Berlin
        - PHP_POST_MAX_SIZE=32MB
    ports:
      - "127.0.0.1:8012:80"
    volumes:
      - ./.data/repos:/var/repo
      - ./.data/mysql:/var/lib/mysql/
If I try to rebuild after I started the container, I get
$ docker-compose build
Building phab
ERROR: Error processing tar file(exit status 1): unexpected EOF
This appears to be caused by the .data directory. The only "cure" I have found is either deleting the directory or moving it outside of the project directory. Renaming the directory to e.g. .data1 does not fix it:
$ sudo mv .data .data1
$ docker-compose build
Building phab
ERROR: Error processing tar file(exit status 1): unexpected EOF
$ sudo mv .data1 ..
$ docker-compose build
Building phab
Step 1/27 : FROM tutum/lamp:latest
---> 3d49e175ec00
Step 2/27 : RUN apt-get update && apt-get install -y php5-curl php5-mysqlnd php5-gd python3-pygments
---> Using cache
[ ... ]
I am using docker-compose 1.18.0, build 8dd22a9 and Docker 18.06.0-ce, build 0ffa825 on Debian 9.5.
I have seen the question Docker ERROR: Error processing tar file(exit status 1): unexpected EOF. However, just flushing the /var/lib/docker directory is not an option for me. Pruning unused images and even removing the base image before the build does not fix the issue.
I had the same issue. The following steps solved it:
1.) Stop Docker Service.
systemctl stop docker
2.) Back up /var/lib/docker.
3.) Remove /var/lib/docker.
sudo rm -rf /var/lib/docker
4.) Start Docker Service.
systemctl start docker
Try upgrading to 18.09, which was just released yesterday. The "unexpected EOF" during a build looks like a known issue with 18.06: https://github.com/moby/moby/pull/37771
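To confirm which daemon version is actually running before and after the upgrade (a generic check, nothing specific to this bug):
docker version --format '{{.Server.Version}}'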
Remove files generated by other containers, such as "db_data" or "mysql_data" directories.
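In the same spirit, rather than deleting the data, you can keep the directory out of the build context entirely with a .dockerignore file next to the Dockerfile (a sketch, assuming the .data layout from the question):
# .dockerignore -- keep container-generated data out of the build context
.data
Since docker-compose build tars up the whole context and sends it to the daemon, excluding the (often root-owned) database files avoids the failing tar step altogether.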

Sharing files between two containers

For a couple of hours I have been struggling with Docker Compose. I am building an Angular app, and I can see the files in the dist directory. Now I want to share these files with the nginx container. I thought a shared volume would do it. But when I add
services:
  client:
    volumes:
      - static:/app/client/dist
  nginx:
    volumes:
      - static:/usr/share/nginx/html
volumes:
  static:
and try docker-compose up --build, I get this error:
client_1 | EBUSY: resource busy or locked, rmdir '/app/client/dist'
client_1 | Error: EBUSY: resource busy or locked, rmdir '/app/client/dist'
client_1 | at Object.fs.rmdirSync (fs.js:863:18)
client_1 | at rmdirSync (/app/client/node_modules/fs-extra/lib/remove/rimraf.js:276:13)
client_1 | at Object.rimrafSync [as removeSync] (/app/client/node_modules/fs-extra/lib/remove/rimraf.js:252:7)
client_1 | at Class.run (/app/client/node_modules/@angular/cli/tasks/build.js:29:16)
client_1 | at Class.run (/app/client/node_modules/@angular/cli/commands/build.js:250:40)
client_1 | at resolve (/app/client/node_modules/@angular/cli/ember-cli/lib/models/command.js:261:20)
client_1 | at new Promise (<anonymous>)
client_1 | at Class.validateAndRun (/app/client/node_modules/@angular/cli/ember-cli/lib/models/command.js:240:12)
client_1 | at Promise.resolve.then.then (/app/client/node_modules/@angular/cli/ember-cli/lib/cli/cli.js:140:24)
client_1 | at <anonymous>
client_1 | npm ERR! code ELIFECYCLE
client_1 | npm ERR! errno 1
client_1 | npm ERR! app@0.0.0 build: `ng build --prod`
client_1 | npm ERR! Exit status 1
client_1 | npm ERR!
client_1 | npm ERR! Failed at the app@0.0.0 build-prod script.
client_1 | npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
Any help is greatly appreciated.
I believe this is, as the error suggests, a deadlock situation. Your docker-compose file has two services that start at roughly the same time, if not simultaneously, and both of them take a hold on the named Docker volume ("static"). When Angular executes ng build, --deleteOutputPath defaults to true, and when it attempts to delete the output directory, the error you received occurs.
If deleteOutputPath is set to false, the issue should be resolved, and perhaps that is sufficient for your needs. If not, as an alternative, set --outputPath to a temp directory within the project directory and move the contents into the Docker volume after Angular builds. If the temp directory path is out/dist and the volume maps to dist, this can be used:
ng build && cp -rf ./out/dist/* ./dist
However, that alternative solution is really just working around the issue. Note that the docker-compose depends_on key will not help in this situation, as it simply expresses a dependency and says nothing about the "readiness" of the dependent service.
Also note that executing docker volume rm <name> will not help here: both services keep a hold on the volume while one is trying to remove it.
Just a thought (I haven't tested it): another alternative is to delete the contents within the output path and set deleteOutputPath to false, since Angular seems to be attempting to delete the directory itself.
Update:
So removing the contents in the output path works! As mentioned, set deleteOutputPath to false, and in the scripts object of your package.json file add something similar to this:
{
  "scripts": {
    "build:production": "rm -rf ./dist/* && ng build --configuration production"
  }
}
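The deleteOutputPath option can also be passed straight on the command line instead of via configuration (hedged: flag spelling as in the Angular CLI; the effect is the same as above):
# keep the mounted dist directory in place instead of removing it first
ng build --prod --delete-output-path=false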
You can try to solve it without using named volumes:
services:
  client:
    volumes:
      - ./static-content:/app/client/dist
  nginx:
    volumes:
      - ./static-content:/usr/share/nginx/html

Dockerized Ionic app hot reload not working

I have an existing Ionic app which I have dockerized. The build and up commands succeed, and I can access the app at http://localhost:8100/ionic-lab. However, hot reload doesn't work: whenever I edit an HTML or CSS file, the changes are not reflected.
My dockerfile:
FROM node:8
COPY package.json /opt/library/
WORKDIR /opt/library
RUN npm install -g cordova ionic && cordova telemetry off
# && echo n | ionic start dockerized-ionic-app --skip-npm --v2 --ts
RUN npm install && npm cache verify
COPY . /opt/library
#CMD ["ionic", "serve", "--all"]
And docker-compose.yml:
app:
  build: .
  ports:
    - '8100:8100'
    - '35729:35729'
  volumes:
    - .:/opt/library
    - /opt/library/node_modules
  command: ionic serve --lab
Why is it happening? What is missing?
UPDATE:
Output of docker-compose build --no-cache
D:\Development\personal_projects\library>docker-compose build --no-cache
Building app
Step 1/6 : FROM node:8
 ---> b87c2ad8344d
Step 2/6 : COPY package.json /opt/library/
---> 4422d0333b92
Step 3/6 : WORKDIR /opt/library
Removing intermediate container 1cfdd60477f9
 ---> 1ca3dc5f5bd6
Step 4/6 : RUN npm install -g cordova ionic && cordova telemetry off
---> Running in d7e9bf4e6d7b
/usr/local/bin/cordova -> /usr/local/lib/node_modules/cordova/bin/cordova
/usr/local/bin/ionic -> /usr/local/lib/node_modules/ionic/bin/ionic
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.1.3 (node_modules/ionic/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.1.3: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
+ cordova@8.0.0
+ ionic@3.19.1
added 660 packages in 29.173s
You have been opted out of telemetry. To change this, run: cordova telemetry on.
Removing intermediate container d7e9bf4e6d7b
---> 3fedee0878af
Step 5/6 : RUN npm install && npm cache verify
---> Running in 8d482b23f6bb
> node-sass@4.5.3 install /opt/library/node_modules/node-sass
> node scripts/install.js
Downloading binary from https://github.com/sass/node-sass/releases/download/v4.5.3/linux-x64-57_binding.node
Download complete
Binary saved to /opt/library/node_modules/node-sass/vendor/linux-x64-57/binding.node
Caching binary to /root/.npm/node-sass/4.5.3/linux-x64-57_binding.node
> uglifyjs-webpack-plugin@0.4.6 postinstall /opt/library/node_modules/uglifyjs-webpack-plugin
> node lib/post_install.js
> node-sass@4.5.3 postinstall /opt/library/node_modules/node-sass
> node scripts/build.js
Binary found at /opt/library/node_modules/node-sass/vendor/linux-x64-57/binding.node
Testing binary
Binary is fine
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.1.3 (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.1.3: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
added 548 packages in 30.281s
Cache verified and compressed (~/.npm/_cacache):
Content verified: 1476 (55779072 bytes)
Index entries: 2306
Finished in 9.736s
Removing intermediate container 8d482b23f6bb
---> 5815e391f2c6
Step 6/6 : COPY . /opt/library
---> 5cc9637a678c
Successfully built 5cc9637a678c
Successfully tagged library_app:latest
D:\Development\personal_projects\library>
And output of docker-compose up:
D:\Development\personal_projects\library>docker-compose up
Recreating library_app_1 ... done
Attaching to library_app_1
app_1 | Starting app-scripts server: --address 0.0.0.0 --port 8100 --livereload-port 35729 --dev-logger-port 53703 --nobrowser --lab - Ctrl+C to cancel
app_1 | [14:45:19] watch started ...
app_1 | [14:45:19] build dev started ...
app_1 | [14:45:19] clean started ...
app_1 | [14:45:19] clean finished in 78 ms
app_1 | [14:45:19] copy started ...
app_1 | [14:45:19] deeplinks started ...
app_1 | [14:45:20] deeplinks finished in 60 ms
app_1 | [14:45:20] transpile started ...
app_1 | [14:45:24] transpile finished in 4.54 s
app_1 | [14:45:24] preprocess started ...
app_1 | [14:45:24] preprocess finished in 1 ms
app_1 | [14:45:24] webpack started ...
app_1 | [14:45:24] copy finished in 5.33 s
app_1 | [14:45:31] webpack finished in 6.73 s
app_1 | [14:45:31] sass started ...
app_1 | [14:45:32] sass finished in 1.46 s
app_1 | [14:45:32] postprocess started ...
app_1 | [14:45:32] postprocess finished in 40 ms
app_1 | [14:45:32] lint started ...
app_1 | [14:45:32] build dev finished in 13.64 s
app_1 | [14:45:32] watch ready in 13.73 s
app_1 | [14:45:32] dev server running: http://localhost:8100/
app_1 |
[OK] Development server running!
app_1 | Local: http://localhost:8100
app_1 | External: http://172.17.0.2:8100
app_1 | DevApp: library@8100 on 1643dcb6c0d7
app_1 |
app_1 | [14:45:35] lint finished in 2.51 s
Your Dockerfile and docker-compose.yml do exactly what is needed.
With the - .:/opt/library line, the volume is mounted correctly and your local changes do take effect in the container as well.
If you are on Windows, the problem is that Hyper-V does not propagate local file changes into the container correctly, so the serve program cannot catch file changes.
The solution is to run ng serve directly and enable polling with the poll flag: ng serve --poll 200 --host=0.0.0.0 --port=8100.
--poll 200 actively looks for file changes every 200 ms
--host=0.0.0.0 sets the host; 0.0.0.0 makes it reachable from other containers
--port=8100 uses the same port as ionic serve (just for convenience)
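Wired into the compose file from the question, that would look something like this (a sketch following the answer above; ports and volumes unchanged):
app:
  build: .
  ports:
    - '8100:8100'
    - '35729:35729'
  volumes:
    - .:/opt/library
    - /opt/library/node_modules
  # polling works around file-change events not crossing the Windows/Hyper-V mount
  command: ng serve --poll 200 --host=0.0.0.0 --port=8100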
You said "hot reload doesn't work", this is correct.
if you re-build docker container then only you will see code changes, because your source code needs to get copy inside your docker-container.
just run docker-compose up -d or rebuild docker container then you should see your code changes.
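For example (hedged; --build forces the image to be rebuilt and the container recreated in one step):
# rebuild the image so COPY picks up the new sources, then recreate the container
docker-compose up -d --build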
You are mapping local port 8100 to container port 8100, which is OK. But you are running Ionic from a container, i.e. in an "external" way.
Try "ionic serve --external".
