I have a small fastify app that connects to an external Redis server.
I am using the fastify-redis npm package (which uses ioredis under the hood).
fastify-redis connects using a rediss:// (TLS) URL:
REDIS_URL='rediss://:xxxxyyyyxxxyxxxyxyxyxyxyxxy@blahblah-dev.redis.cache.windows.net:6380'
const Fastify = require('fastify')
const fastifyRedis = require('@fastify/redis')

const fastify = Fastify({ logger: true, pluginTimeout: 50000 })
fastify.register(fastifyRedis, {
  url: process.env.REDIS_URL,
  enableAutoPipelining: true,
})
This all works fine when run locally using npm start.
When I dockerise it, though, I get an error which looks like it is caused by not being able to connect to the Redis instance:
redisutils_1 | > node index.js
redisutils_1 |
redisutils_1 | /usr/src/node_modules/ioredis/built/redis/event_handler.js:175
redisutils_1 | self.flushQueue(new errors_1.MaxRetriesPerRequestError(maxRetriesPerRequest));
redisutils_1 | ^
redisutils_1 |
redisutils_1 | MaxRetriesPerRequestError: Reached the max retries per request limit (which is 20). Refer to "maxRetriesPerRequest" option for details.
redisutils_1 | at Socket.<anonymous> (/usr/src/node_modules/ioredis/built/redis/event_handler.js:175:37)
redisutils_1 | at Object.onceWrapper (node:events:628:26)
redisutils_1 | at Socket.emit (node:events:513:28)
redisutils_1 | at TCP.<anonymous> (node:net:313:12)
redisutils_1 |
redisutils_1 | Node.js v18.9.0
What have I missed?
You will most likely need to run your container with --network host, as your container is running inside a private network and cannot reach your host's network to communicate with external services.
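A minimal sketch of what that looks like (the image name my-fastify-app is just a placeholder for your own image; docker-compose users would set network_mode: host on the service instead):

docker run --network host -e REDIS_URL="$REDIS_URL" my-fastify-app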
I discovered the issue.
Two things:
(The dumb one) make sure any environment variable settings (like the rediss URL) are actually being set!
The Redis server was refusing the connection due to certificate issues. I was using bullseye-slim and had to change to alpine and add a step to install the CA certificates:
FROM node:alpine
RUN apk update && apk add ca-certificates && update-ca-certificates
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . .
EXPOSE 3000
RUN chown -R node /usr/src/app
USER node
CMD ["npm", "start"]
Namely
RUN apk update && apk add ca-certificates && update-ca-certificates
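If you would rather keep the bullseye-slim base image, the Debian equivalent of that step should be along these lines (an untested assumption on my part, but apt's ca-certificates package serves the same purpose):

RUN apt-get update && apt-get install -y ca-certificates && update-ca-certificates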
Related
This is the dockerfile:
FROM node
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3001
CMD [ "npm","start" ]
this is the docker-compose file:
userPortal:
  image: userportal:latest
  ports:
    - 3001:3001
  links:
    - apiServer
  command: ["npm", "start"]
this is the docker-compose ps:
localdeployment_userPortal_1 docker-entrypoint.sh npm start Up 0.0.0.0:3001->3001/tcp
this is the package.json:
"scripts": {
  "start": "set PORT=3001 && react-scripts start",
  ...
}
this is the container's logs:
userPortal_1 |
userPortal_1 | > user-portal@0.1.0 start
userPortal_1 | > set PORT=3001 && react-scripts start
userPortal_1 |
userPortal_1 | (node:31) [DEP_WEBPACK_DEV_SERVER_ON_AFTER_SETUP_MIDDLEWARE] DeprecationWarning: 'onAfterSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
userPortal_1 | (Use `node --trace-deprecation ...` to show where the warning was created)
userPortal_1 | (node:31) [DEP_WEBPACK_DEV_SERVER_ON_BEFORE_SETUP_MIDDLEWARE] DeprecationWarning: 'onBeforeSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
userPortal_1 | Starting the development server...
and this is what I get when I try to access localhost:3001 (screenshot): https://i.stack.imgur.com/lwaOR.png
When I use npm start without docker it works fine so it is not a proxy problem.
I am trying to run chromedp in docker.
My main.go:
package main

import (
    "context"
    "log"
    "time"

    "github.com/chromedp/chromedp"
)

func main() {
    log.SetFlags(log.LstdFlags | log.Llongfile)

    ctx, cancel := chromedp.NewContext(
        context.Background(),
        chromedp.WithLogf(log.Printf),
    )
    defer cancel()

    // create a timeout
    ctx, cancel = context.WithTimeout(ctx, 15*time.Second)
    defer cancel()

    u := `https://www.whatismybrowser.com/detect/what-is-my-user-agent`
    selector := `#detected_value`

    log.Println("requesting", u)
    log.Println("selector", selector)

    var result string
    err := chromedp.Run(ctx,
        chromedp.Navigate(u),
        chromedp.WaitReady(selector),
        chromedp.OuterHTML(selector, &result),
    )
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("result:\n%s", result)
}
Dockerfile:
FROM golang:latest as build-env
RUN mkdir $GOPATH/src/app
WORKDIR $GOPATH/src/app
ENV GO111MODULE=on
COPY go.mod .
COPY go.sum .
COPY main.go .
RUN go mod download
RUN go build -o /root/app
FROM chromedp/headless-shell
COPY --from=build-env /root/app /
CMD ["/app"]
When I run it:
docker-compose build
docker-compose up
It outputs:
app_1 | [1129/192523.576726:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 | [1129/192523.602779:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 |
app_1 | DevTools listening on ws://0.0.0.0:9222/devtools/browser/3fa247e0-e2fa-484e-8b5f-172b392701bb
app_1 | [1129/192523.836854:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 | [1129/192523.838804:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 | [1129/192523.845866:ERROR:egl_util.cc(60)] Failed to load GLES library: /headless-shell/swiftshader/libGLESv2.so: /headless-shell/swiftshader/libGLESv2.so: cannot open shared object file: No such file or directory
app_1 | [1129/192523.871796:ERROR:viz_main_impl.cc(176)] Exiting GPU process due to errors during initialization
app_1 | [1129/192523.897083:WARNING:gpu_process_host.cc(1220)] The GPU process has crashed 1 time(s)
app_1 | [1129/192523.926741:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 | [1129/192523.930111:ERROR:egl_util.cc(60)] Failed to load GLES library: /headless-shell/swiftshader/libGLESv2.so: /headless-shell/swiftshader/libGLESv2.so: cannot open shared object file: No such file or directory
app_1 | [1129/192523.943794:ERROR:viz_main_impl.cc(176)] Exiting GPU process due to errors during initialization
app_1 | [1129/192523.948757:WARNING:gpu_process_host.cc(1220)] The GPU process has crashed 2 time(s)
app_1 | [1129/192523.950107:ERROR:browser_gpu_channel_host_factory.cc(138)] Failed to launch GPU process.
app_1 | [1129/192524.013014:ERROR:browser_gpu_channel_host_factory.cc(138)] Failed to launch GPU process.
So it doesn't run my Go app. I expected that chromedp/headless-shell contains Chrome and that my Go app would be able to use it via github.com/chromedp/chromedp.
Update 1
I added missing directories:
RUN mkdir -p /headless-shell/swiftshader/ \
&& cd /headless-shell/swiftshader/ \
&& ln -s ../libEGL.so libEGL.so \
&& ln -s ../libGLESv2.so libGLESv2.so
and now get the following output; my app is still not running:
app_1 | [1202/071210.095414:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 | [1202/071210.112632:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 |
app_1 | DevTools listening on ws://0.0.0.0:9222/devtools/browser/86e31db1-3a17-4da6-9e2f-696647572492
app_1 | [1202/071210.166158:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
app_1 | [1202/071210.186307:WARNING:resource_bundle.cc(426)] locale_file_path.empty() for locale
Update 2
It looks like CMD ["/app"] doesn't run my compiled app, because none of the lines it prints show up.
And when I run it manually:
$ /usr/local/bin/docker exec -ti chromedp_docker_app_1 /bin/bash
root@0c417fd159a2:/# /app
2019/12/02 07:40:34 app is running
2019/12/02 07:40:34 /go/src/app/main.go:26: requesting https://www.whatismybrowser.com/detect/what-is-my-user-agent
2019/12/02 07:40:34 /go/src/app/main.go:27: selector #detected_value
2019/12/02 07:40:34 /go/src/app/main.go:35: exec: "google-chrome": executable file not found in $PATH
I see that google-chrome app is still not there, hmmm....
You are missing a few things here. First, you need to run headless Chrome inside your container. You can use the following Dockerfile:
FROM golang:1.12.0-alpine3.9
RUN apk update && apk upgrade && apk add --no-cache bash git && apk add --no-cache chromium
# Installs latest Chromium package.
RUN echo @edge http://nl.alpinelinux.org/alpine/edge/community >> /etc/apk/repositories \
    && echo @edge http://nl.alpinelinux.org/alpine/edge/main >> /etc/apk/repositories \
    && apk add --no-cache \
    harfbuzz@edge \
    nss@edge \
    freetype@edge \
    ttf-freefont@edge \
    && rm -rf /var/cache/* \
    && mkdir /var/cache/apk
RUN go get github.com/mafredri/cdp
CMD chromium-browser --headless --disable-gpu --remote-debugging-port=9222 --disable-web-security --safebrowsing-disable-auto-update --disable-sync --disable-default-apps --hide-scrollbars --metrics-recording-only --mute-audio --no-first-run --no-sandbox
I am using CDP, which is more robust and fun for me!
This is the link for CDP: https://github.com/mafredri/cdp
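For completeness, a rough sketch of the Go side when talking to that remote-debugging port with mafredri/cdp, following the pattern from the cdp README (port 9222 matches the CMD above; error handling is kept minimal and the target URL is just an example):

package main

import (
    "context"
    "log"
    "time"

    "github.com/mafredri/cdp"
    "github.com/mafredri/cdp/devtool"
    "github.com/mafredri/cdp/protocol/page"
    "github.com/mafredri/cdp/rpcc"
)

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
    defer cancel()

    // Ask the DevTools endpoint started by chromium-browser above for a page target.
    devt := devtool.New("http://127.0.0.1:9222")
    target, err := devt.Get(ctx, devtool.Page)
    if err != nil {
        if target, err = devt.Create(ctx); err != nil {
            log.Fatal(err)
        }
    }

    // Connect to that target over the Chrome DevTools Protocol.
    conn, err := rpcc.DialContext(ctx, target.WebSocketDebuggerURL)
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    c := cdp.NewClient(conn)

    // Navigate the page and log the resulting frame ID.
    nav, err := c.Page.Navigate(ctx, page.NewNavigateArgs("https://www.whatismybrowser.com/detect/what-is-my-user-agent"))
    if err != nil {
        log.Fatal(err)
    }
    log.Println("navigated, frame:", nav.FrameID)
}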
It is not pretty, but here is a simple Dockerfile that worked for me:
FROM golang:1.16.5 AS build-env
RUN apt update && apt -y upgrade
RUN apt -y install chromium
WORKDIR /app
ADD ./ ./
RUN go mod download
RUN go build -o /docker-gs-ping
CMD [ "/docker-gs-ping" ]
I'm trying to get a Rails 6 application to run in Docker. When the command rails server is executed from the Dockerfile, I get the following error:
remote: web_1_a59d968487d2 | warning Integrity check: System parameters don't match
remote: web_1_a59d968487d2 | error Integrity check failed
remote: web_1_a59d968487d2 | error Found 1 errors.
remote: web_1_a59d968487d2 |
remote: web_1_a59d968487d2 |
remote: web_1_a59d968487d2 | ========================================
remote: web_1_a59d968487d2 | Your Yarn packages are out of date!
remote: web_1_a59d968487d2 | Please run `yarn install --check-files` to update.
remote: web_1_a59d968487d2 | ========================================
remote: web_1_a59d968487d2 |
remote: web_1_a59d968487d2 |
remote: web_1_a59d968487d2 | To disable this check, please change `check_yarn_integrity`
remote: web_1_a59d968487d2 | to `false` in your webpacker config file (config/webpacker.yml).
remote: web_1_a59d968487d2 |
remote: web_1_a59d968487d2 |
remote: web_1_a59d968487d2 | yarn check v1.16.0
remote: web_1_a59d968487d2 | info Visit https://yarnpkg.com/en/docs/cli/check for documentation about this command.
In my config/webpacker.yml file I have this line:
development:
  <<: *default
  check_yarn_integrity: false
In my config/environments/development.rb:
config.webpacker.check_yarn_integrity = false
I am also building my node_modules as part of the docker setup (Dockerfile):
FROM ruby:2.6.3
RUN apt-get update && apt-get install -y apt-transport-https
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update -qq && apt-get install -y nodejs yarn
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN bundle install
COPY . /myapp
RUN rm -Rf node_modules/
RUN rm yarn.lock
RUN yarn install
ENV RAILS_ENV=development
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
Running docker-compose run web rails s -b 0.0.0.0 works.
Running docker-compose up --build web returns the error.
You can compare your Dockerfile + docker-compose.yml with this one and see if there is any difference (like using RUN yarn install --check-files) which would make the error message disappear.
Another example (Dockerfile+docker-compose.yml) is used in "Running a Rails app with Webpacker and Docker" from Dirk de Kok
In both instances, the application is started with docker-compose up.
And they have followed, as you have, the recommendations of rails/webpacker issue 1568 (regarding config.webpacker.check_yarn_integrity = false in config/environments/development.rb)
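For reference, the concrete change suggested above would presumably be to run the check-files variant during the image build, i.e. to replace the plain yarn install line in the Dockerfile with:

RUN yarn install --check-files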
It worked today. No code was changed; it just decided to work. I tried running docker-compose run web rm -rf / to start over, but it ignored that command and then started working. C'est la vie. @vonc thanks for the effort, I'll reward you.
Edit: It returned. This time I fixed it using
docker rm $(docker ps -a -q)
Warning: This destroys all your containers. Do not use this if you have data inside your volumes.
The cause of the problem was experimenting with creating a Dockerfile while compose was not clearing out a layer of the volume. docker-compose run is different from docker-compose up because run creates a new layer on top of the Docker volume to execute the command, essentially creating a new container. Docker itself was failing to apply the changes to an old layer.
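A less destructive alternative, assuming only this project's containers need resetting rather than everything on the machine, would be something like:

docker-compose down --remove-orphans   # remove this project's containers and networks
docker-compose up --build              # rebuild the image and recreate the containers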
Making config.webpacker.check_yarn_integrity = false is not a good idea.
This error occurs due to a version incompatibility.
Try
rails webpacker:install
It should solve your problem.
If not, try
$ rm yarn.lock
$ yarn cache clean
$ yarn install
I had an existing Ionic app which I have dockerized. The build and up commands are successful and I can access the app at http://localhost:8100/ionic-lab. However, hot reload doesn't work. Whenever I edit an HTML or CSS file, the changes are not reflected.
My dockerfile:
FROM node:8
COPY package.json /opt/library/
WORKDIR /opt/library
RUN npm install -g cordova ionic && cordova telemetry off
# && echo n | ionic start dockerized-ionic-app --skip-npm --v2 --ts
RUN npm install && npm cache verify
COPY . /opt/library
#CMD ["ionic", "serve", "--all"]
And docker-compose.yml:
app:
  build: .
  ports:
    - '8100:8100'
    - '35729:35729'
  volumes:
    - .:/opt/library
    - /opt/library/node_modules
  command: ionic serve --lab
Why is it happening? What is missing?
UPDATE:
Output of docker-compose build --no-cache
D:\Development\personal_projects\library>docker-compose build --no-cache
Building app
Step 1/6 : FROM node:8
 ---> b87c2ad8344d
Step 2/6 : COPY package.json /opt/library/
---> 4422d0333b92
Step 3/6 : WORKDIR /opt/library
Removing intermediate container 1cfdd60477f9
 ---> 1ca3dc5f5bd6
Step 4/6 : RUN npm install -g cordova ionic && cordova telemetry off
---> Running in d7e9bf4e6d7b
/usr/local/bin/cordova -> /usr/local/lib/node_modules/cordova/bin/cordova
/usr/local/bin/ionic -> /usr/local/lib/node_modules/ionic/bin/ionic
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.1.3 (node_modules/ionic/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.1.3: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
+ cordova@8.0.0
+ ionic@3.19.1
added 660 packages in 29.173s
You have been opted out of telemetry. To change this, run: cordova telemetry on.
Removing intermediate container d7e9bf4e6d7b
---> 3fedee0878af
Step 5/6 : RUN npm install && npm cache verify
---> Running in 8d482b23f6bb
> node-sass@4.5.3 install /opt/library/node_modules/node-sass
> node scripts/install.js
Downloading binary from https://github.com/sass/node-sass/releases/download/v4.5.3/linux-x64-57_binding.node
Download complete
Binary saved to /opt/library/node_modules/node-sass/vendor/linux-x64-57/binding.node
Caching binary to /root/.npm/node-sass/4.5.3/linux-x64-57_binding.node
> uglifyjs-webpack-plugin@0.4.6 postinstall /opt/library/node_modules/uglifyjs-webpack-plugin
> node lib/post_install.js
> node-sass@4.5.3 postinstall /opt/library/node_modules/node-sass
> node scripts/build.js
Binary found at /opt/library/node_modules/node-sass/vendor/linux-x64-57/binding.node
Testing binary
Binary is fine
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.1.3 (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.1.3: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
added 548 packages in 30.281s
Cache verified and compressed (~/.npm/_cacache):
Content verified: 1476 (55779072 bytes)
Index entries: 2306
Finished in 9.736s
Removing intermediate container 8d482b23f6bb
---> 5815e391f2c6
Step 6/6 : COPY . /opt/library
---> 5cc9637a678c
Successfully built 5cc9637a678c
Successfully tagged library_app:latest
D:\Development\personal_projects\library>
And output of docker-compose up:
D:\Development\personal_projects\library>docker-compose up
Recreating library_app_1 ... done
Attaching to library_app_1
Starting app-scripts server: --address 0.0.0.0 --port 8100 --livereload-port 35729 --dev-logger-port 53703 --nobrowser --lab - Ctrl+C to cancel
app_1 | [14:45:19] watch started ...
app_1 | [14:45:19] build dev started ...
app_1 | [14:45:19] clean started ...
app_1 | [14:45:19] clean finished in 78 ms
app_1 | [14:45:19] copy started ...
app_1 | [14:45:19] deeplinks started ...
app_1 | [14:45:20] deeplinks finished in 60 ms
app_1 | [14:45:20] transpile started ...
app_1 | [14:45:24] transpile finished in 4.54 s
app_1 | [14:45:24] preprocess started ...
app_1 | [14:45:24] preprocess finished in 1 ms
app_1 | [14:45:24] webpack started ...
app_1 | [14:45:24] copy finished in 5.33 s
app_1 | [14:45:31] webpack finished in 6.73 s
app_1 | [14:45:31] sass started ...
app_1 | [14:45:32] sass finished in 1.46 s
app_1 | [14:45:32] postprocess started ...
app_1 | [14:45:32] postprocess finished in 40 ms
app_1 | [14:45:32] lint started ...
app_1 | [14:45:32] build dev finished in 13.64 s
app_1 | [14:45:32] watch ready in 13.73 s
app_1 | [14:45:32] dev server running: http://localhost:8100/
app_1 |
[OK] Development server running!
app_1 | Local: http://localhost:8100
app_1 | External: http://172.17.0.2:8100
app_1 | DevApp: library@8100 on 1643dcb6c0d7
app_1 |
app_1 | [14:45:35] lint finished in 2.51 s
Your Dockerfile and docker-compose.yml do exactly what is needed.
With the - .:/opt/library line the volume is mounted correctly, and your local changes do take effect inside the container as well.
If you are on Windows, the problem is that Hyper-V is not capable of propagating local file changes correctly into the container, so the serve program cannot detect them.
The solution is to run ng serve directly and enable polling with the poll flag: ng serve --poll 200 --host=0.0.0.0 --port=8100 (a compose sketch follows the flag notes below).
--poll 200 actively checks for file changes every 200 ms
--host=0.0.0.0 sets the host; 0.0.0.0 makes the server reachable from other containers
--port=8100 uses the same port as ionic serve (just for convenience)
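Wired into the compose file from the question, that suggestion would look roughly like this (assuming the Angular CLI is available inside the image; otherwise substitute the equivalent polling option of whatever dev server your project actually uses):

app:
  build: .
  ports:
    - '8100:8100'
    - '35729:35729'
  volumes:
    - .:/opt/library
    - /opt/library/node_modules
  command: ng serve --poll 200 --host=0.0.0.0 --port=8100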
You said "hot reload doesn't work", this is correct.
if you re-build docker container then only you will see code changes, because your source code needs to get copy inside your docker-container.
just run docker-compose up -d or rebuild docker container then you should see your code changes.
You are mapping the local 8100 port to the container's 8100 port, which is fine. You are running Ionic from a container, in an external way.
Try "ionic serve --external".
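In the compose file from the question, that would just be a change to the service's command line, e.g. (keeping the --lab flag from the original setup):

command: ionic serve --lab --external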
How do I deploy a MEAN stack application in Docker?
I have an error in the MongoDB connection, so the MEAN stack web application is not responding.
Here are my steps:
Pulled the image from DockerHub:
sudo docker pull crissi/airlineinsurance
Verified Images
sudo docker images
Run the mongodb Container
sudo docker run -d -p 27017:27017 --name airlineInsurance -d mongo
Verified it is running:
sudo docker ps -l
Run the Application Container
sudo docker run -d -P crissi/airlineinsurance
Verified with:
sudo docker ps -l
Checking the logs
sudo docker logs 8efba551fdc6
The resulting log is as follows:
[nodemon] 1.11.0
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: *.*
[nodemon] starting `node server.js`
Server running at http://127.0.0.1:9000
Server running at https://127.0.0.1:9030
/app/node_modules/mongodb/lib/server.js:261
process.nextTick(function() { throw err; })
^
MongoError: failed to connect to server [localhost:27017] on first connect
at Pool.<anonymous> (/app/node_modules/mongodb-core/lib/topologies/server.js:313:35)
at emitOne (events.js:96:13)
at Pool.emit (events.js:188:7)
at Connection.<anonymous> (/app/node_modules/mongodb-core/lib/connection/pool.js:271:12)
at Connection.g (events.js:291:16)
at emitTwo (events.js:106:13)
at Connection.emit (events.js:191:7)
at Socket.<anonymous> (/app/node_modules/mongodb-core/lib/connection/connection.js:165:49)
at Socket.g (events.js:291:16)
at emitOne (events.js:96:13)
at Socket.emit (events.js:188:7)
at emitErrorNT (net.js:1281:8)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
[nodemon] app crashed - waiting for file changes before starting...
I have included the Dockerfile for your reference:
# Tells the Docker which base image to start.
FROM node
# Adds files from the host file system into the Docker container.
ADD . /app
# Sets the current working directory for subsequent instructions
WORKDIR /app
RUN npm install
RUN npm install -g bower
RUN bower install --allow-root
RUN npm install -g nodemon
#expose a port to allow external access
EXPOSE 9030
# Start mean application
CMD ["nodemon", "server.js"]
It depends on how you define your Dockerfile.
Since your app involves multiple processes (your app + mongodb), you could use supervisor to launch both.
See this example using a supervisord.conf like:
[supervisord]
nodaemon=true
[program:mongod]
command=/usr/bin/mongod --smallfiles
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=true
[program:nodejs]
command=nodejs /opt/app/server/server.js
Replace the nodejs command with your own application.
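A rough sketch of how the question's Dockerfile could be extended for this single-container approach (assumptions: a Debian-based node image, a supervisord.conf like the one above sitting in the build context, and mongod installed into the same image, e.g. from MongoDB's own apt repository):

FROM node
ADD . /app
WORKDIR /app
RUN npm install
# supervisor will start both mongod and the Node app defined in supervisord.conf
RUN apt-get update && apt-get install -y supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 9030
# -n keeps supervisord in the foreground so the container stays alive
CMD ["/usr/bin/supervisord", "-n"]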