Module /opt/rejson.so failed to load - docker

I've been creating new instances for Redis and clustering them. I would like my instances to use ReJSON, so I'm using redislabs/rejson. The only problem is that when composing up, I get this issue for all Redis nodes.
Here is the Dockerfile for all Redis nodes:
FROM redislabs/rejson:latest AS redis
ARG REDIS_PORT
WORKDIR /redis-workdir
# Installing OS level dependencies
RUN apt-get update
RUN apt-get install -y wget
RUN apt-get install -y gettext-base
RUN mkdir -p "/opt"
# Downloading redis default config
RUN wget https://raw.githubusercontent.com/wayofthepie/docker-rejson/master/redis.conf
RUN mv redis.conf redis.default.conf
COPY . .
ENV REDIS_PORT $REDIS_PORT
RUN envsubst < redis.conf > updated_redis.conf
RUN mv updated_redis.conf redis.conf
CMD redis-server ./redis.conf
The failure comes from the last line of the config at this link: https://raw.githubusercontent.com/wayofthepie/docker-rejson/master/redis.conf
Thanks for helping out.
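A hedged workaround sketch, not from the original post: the downloaded config's last line loads the module from /opt/rejson.so, while newer redislabs/rejson images may ship it elsewhere (commonly /usr/lib/redis/modules/rejson.so, though that path is an assumption here). Locating the module during the build and symlinking it to the expected path would look roughly like this, added to the Dockerfile above:
# Print the module's real location in the build log so the path below can be verified
RUN find / -name 'rejson.so' 2>/dev/null || true
# Assumption: adjust the source path to whatever the find step printed
RUN ln -sf /usr/lib/redis/modules/rejson.so /opt/rejson.so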

Related

Docker ENTRYPOINT does not run two commands

I have a docker-compose.yml with two services, Grafana and Ubuntu. I'm trying to run the Prometheus and node_exporter commands in the Ubuntu container through the entrypoint, but it only works for the first command.
Dockerfile:
FROM ubuntu:20.04
ENV PROMETHEUS_VERISION=2.38.0
ENV NODE_EXPORTER_VERISION=1.4.0
RUN apt update -y && apt upgrade -y
RUN apt install -y wget
WORKDIR /
# Install Prometheus
RUN wget https://github.com/prometheus/prometheus/releases/download/v$PROMETHEUS_VERISION/prometheus-$PROMETHEUS_VERISION.linux-amd64.tar.gz && \
tar xvfz prometheus-$PROMETHEUS_VERISION.linux-amd64.tar.gz
ADD cstm_prometheus.yml /prometheus-$PROMETHEUS_VERISION.linux-amd64/cstm_prometheus.yml
EXPOSE 9090
# Install Node Exporter
RUN wget https://github.com/prometheus/node_exporter/releases/download/v$NODE_EXPORTER_VERISION/node_exporter-$NODE_EXPORTER_VERISION.linux-amd64.tar.gz && \
tar xvfz node_exporter-$NODE_EXPORTER_VERISION.linux-amd64.tar.gz
EXPOSE 9100
COPY ./cstm_entrypoint.sh /
RUN ["chmod", "+x", "/cstm_entrypoint.sh"]
ENTRYPOINT ["/cstm_entrypoint.sh"]
cstm_entrypoint.sh:
#!/bin/bash
./prometheus-$PROMETHEUS_VERISION.linux-amd64/prometheus --config.file=/prometheus-$PROMETHEUS_VERISION.linux-amd64/cstm_prometheus.yml
./node_exporter-$NODE_EXPORTER_VERISION.linux-amd64/node_exporter
When I check the services in a web browser, I have access to:
grafana: 0.0.0.0:3000
prometheus: 0.0.0.0:9090
but not for node_exporter on 0.0.0.0:9100
Could anybody help me, please?
Thanks in advance.
Your script waits for Prometheus to finish before it starts node_exporter. Try adding an & at the end of the Prometheus command to have it detach from the shell. Then the script will continue and run the node_exporter command. Like this:
#!/bin/bash
./prometheus-$PROMETHEUS_VERISION.linux-amd64/prometheus --config.file=/prometheus-$PROMETHEUS_VERISION.linux-amd64/cstm_prometheus.yml &
./node_exporter-$NODE_EXPORTER_VERISION.linux-amd64/node_exporter
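One caveat with that script (my note, not part of the answer above): if Prometheus later crashes, the container keeps running with only node_exporter, since the script only ends when node_exporter exits. Backgrounding both processes and waiting on them makes the container stop as soon as either one dies:
#!/bin/bash
# Background both processes, then exit when either terminates
# (wait -n needs bash 4.3+, which ubuntu:20.04 provides)
./prometheus-$PROMETHEUS_VERISION.linux-amd64/prometheus --config.file=/prometheus-$PROMETHEUS_VERISION.linux-amd64/cstm_prometheus.yml &
./node_exporter-$NODE_EXPORTER_VERISION.linux-amd64/node_exporter &
wait -n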

Problem Loading Ruby Rails Unicorn in ECS Fargate When Building Image in CircleCI (Works Locally)

I am having issues deploying a Ruby on Rails app to ECS Fargate. When I build the image locally (the same way it is done in the pipeline), I can easily start the web service with this command: ["bundle", "exec", "unicorn", "-c", "config/unicorn.rb"]. However, running this same image in Fargate returns this: 2022-07-27 15:46:33 bundler: failed to load command: unicorn (/usr/local/bundle/bin/unicorn)
Here is my Dockerfile:
FROM ruby:2.6.7
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client sudo
WORKDIR /app
ENV RAILS_ENV="staging"
ENV NODE_ENV="staging"
ENV LANG="en_US.UTF-8"
ENV RACK_ENV="staging"
ENV BUNDLE_WITHOUT='development:test'
ARG GITHUB_TOKEN
RUN gem install bundler -v '2.2.28'
RUN bundle config https://github.com/somename/somerepo someuser:"${GITHUB_TOKEN}"
COPY . /app
RUN rm -rf /app/tmp
RUN mkdir -p /app/tmp
RUN bundle install
RUN bundle exec rails assets:precompile
EXPOSE 3000
The CMD is missing from the Dockerfile because it's added in the task definition. Has anyone run into this issue? I've tried a number of different approaches, but am unsure of what differs between running locally and running in Fargate.
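For reference, the container command lives in the ECS task definition rather than in the image; a hypothetical fragment (container name, image URI, and port are placeholders) might look like:
{
  "containerDefinitions": [
    {
      "name": "rails-app",
      "image": "<account>.dkr.ecr.<region>.amazonaws.com/rails-app:latest",
      "command": ["bundle", "exec", "unicorn", "-c", "config/unicorn.rb"],
      "portMappings": [{ "containerPort": 3000, "protocol": "tcp" }]
    }
  ]
}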
Update
Looking into this issue further, I found some more information that needs to be added here.
I am using CircleCI to build and push this image to ECR. The issue seems to be that when the image is built in CircleCI, the artifacts created by bundle install become unreachable at runtime, and Docker is unable to run any gems because the gem path is not accessible even to the root user. I pulled the image created by CircleCI locally and confirmed the same errors. Exec'ing into the container, running chmod 755 -R /usr/local/bundle/bin/, and then executing the bundle exec command to start the service properly starts unicorn.
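In command form, the local check described above looks roughly like this (the image name is a placeholder):
# Pull the CircleCI-built image and open a shell instead of the normal command
docker run -it --entrypoint /bin/bash <image-built-by-circleci>
# Inside the container: restore execute permissions on the bundler binstubs,
# then start the service the way the task definition would
chmod 755 -R /usr/local/bundle/bin/
bundle exec unicorn -c config/unicorn.rb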
Next Steps
As a result, I attempted to add those changes to the Dockerfile at build time, but the same behavior still persists.
RUN bundle install
RUN chmod 755 -R /usr/local/bundle/bin/
Then I tried changing permissions in an entrypoint script, and the container wouldn't start at all.
I finally figured this out a few days ago. The answer is to add VOLUME instructions at the end of your Dockerfile. This maintains persistence for any changes you have made. My final Dockerfile:
FROM ruby:2.6.7
ARG NPM_TOKEN
RUN apt-get update -qq && apt-get install -y \
build-essential \
curl \
postgresql-client \
software-properties-common \
sudo
RUN curl -fsSL https://deb.nodesource.com/setup_9.x | sudo -E bash - && \
sudo apt-get install -y nodejs
RUN apt-get install -y \
npm \
yarn
ENV BUNDLE_WITHOUT='development:test'
ENV RAILS_ENV="staging"
ENV RACK_ENV="staging"
ENV NPM_TOKEN="${NPM_TOKEN}"
RUN mkdir -p /dashboard
WORKDIR /dashboard
RUN mkdir -p /dashboard/tmp/pids
COPY Gemfile* ./
COPY gems/rails_admin_history_rollback /dashboard/gems/rails_admin_history_rollback
ARG GITHUB_TOKEN
RUN sudo gem install bundler -v '2.2.28' && \
bundle config https://github.com/some-company/some-repo some-name:"${GITHUB_TOKEN}"
RUN bundle install
COPY . /dashboard
RUN bundle exec rails assets:precompile
RUN chmod +x bin/entrypoint.sh
VOLUME /dashboard/
VOLUME /usr/local/bundle
EXPOSE 3000

Gcloud meanjs build failing via docker install

I'm trying to deploy the following container on Google Cloud App Engine using gcloud app deploy; it's the vanilla meanjs.org image. It uses a Dockerfile. I'm new to Docker and trying to learn it on the fly, so if anyone can help, that'd be great. Thanks.
It looks as if the install of Node via the Dockerfile fails. I've checked Node's documentation on GitHub, and nothing has changed syntactically compared to what is in the existing Dockerfile. I will attempt to recreate this on my local workstation this morning and will update this question shortly.
The errors are in the attached screenshots: "first docker error", "second error", "build fail error".
The Dockerfile:
# Build:
# docker build -t meanjs/mean .
#
# Run:
# docker run -it meanjs/mean
#
# Compose:
# docker-compose up -d
FROM ubuntu:latest
MAINTAINER MEAN.JS
# 80 = HTTP, 443 = HTTPS, 3000 = MEAN.JS server, 35729 = livereload, 8080 = node-inspector
EXPOSE 80 443 3000 35729 8080
# Set development environment as default
ENV NODE_ENV development
# Install Utilities
RUN apt-get update -q \
&& apt-get install -yqq \
curl \
git \
ssh \
gcc \
make \
build-essential \
libkrb5-dev \
sudo \
apt-utils \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install nodejs
RUN curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
RUN sudo apt-get install -yq nodejs \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install MEAN.JS Prerequisites
RUN npm install --quiet -g gulp bower yo mocha karma-cli pm2 && npm cache clean
RUN mkdir -p /opt/mean.js/public/lib
WORKDIR /opt/mean.js
# Copies the local package.json file to the container
# and utilities docker container cache to not needing to rebuild
# and install node_modules/ everytime we build the docker, but only
# when the local package.json file changes.
# Install npm packages
COPY package.json /opt/mean.js/package.json
RUN npm install --quiet && npm cache clean
# Install bower packages
COPY bower.json /opt/mean.js/bower.json
COPY .bowerrc /opt/mean.js/.bowerrc
RUN bower install --quiet --allow-root --config.interactive=false
COPY . /opt/mean.js
# Run MEAN.JS server
CMD npm install && npm start
Okay, so after much unsuccessful wrestling trying to install Docker on Windows, I went back to the Dockerfile to try and identify the core issue here. Fortunately, I found a solution, as follows.
Node.js is being installed on Ubuntu.
In the Dockerfile at the root of the app, the Ubuntu version is configured as:
FROM ubuntu:latest
Simply change it to:
FROM ubuntu:14.04
I'm not sure if this is the best version to use for the build, but it seems to run successfully. Please feel free to amend or recommend an alternative solution. I'm new to Docker, so please be kind.
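An alternative worth considering (my suggestion, not part of the original fix): start from an official Node image of the same era so the Ubuntu/NodeSource install step disappears entirely. A rough sketch:
FROM node:6
# 80 = HTTP, 443 = HTTPS, 3000 = MEAN.JS server, 35729 = livereload, 8080 = node-inspector
EXPOSE 80 443 3000 35729 8080
ENV NODE_ENV development
# Install MEAN.JS prerequisites
RUN npm install --quiet -g gulp bower yo mocha karma-cli pm2 && npm cache clean
WORKDIR /opt/mean.js
# Install npm and bower packages, reusing the Docker build cache
COPY package.json /opt/mean.js/package.json
RUN npm install --quiet && npm cache clean
COPY bower.json /opt/mean.js/bower.json
COPY .bowerrc /opt/mean.js/.bowerrc
RUN bower install --quiet --allow-root --config.interactive=false
COPY . /opt/mean.js
# Run MEAN.JS server
CMD npm install && npm start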

Docker port forwarding cannot see the output on browser

I am a newbie to Docker. I'm using Ubuntu 14.04 as my OS, and I've installed Docker Community Edition by following the instructions at https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/#set-up-the-repository
I have created a Dockerfile for my project and run it using a docker-compose file.
My Dockerfile is as follows.
# ImageName
FROM node:8.8.1
# Create app required directories
ENV appDir /usr/src/app
RUN mkdir -p /usr/src/app /usr/src/app/datas /usr/log/supervisor
# Change working directory
WORKDIR ${appDir}
# Install dependencies
RUN apt-get update && \
apt-get -y install vim \
supervisor \
python3 \
python3-pip \
python3-setuptools \
groff \
less \
&& pip3 install --upgrade pip \
&& apt-get clean
RUN pip3 --no-cache-dir install --upgrade awscli
# Install app dependencies
COPY graphql/package.json /usr/src/app
RUN npm install
RUN npm install -g webpack
# Copy app source code
COPY graphql/ /usr/src/app
COPY datas/ /usr/src/app/datas
# Set Environment Variables
RUN echo export DATA_DIR=/usr/src/app/datas/ >> ~/.data_variables && \
echo "source ~/.data_variables" >> ~/.bash_login && \
echo "source ~/.data_variables" >> ~/.bashrc
COPY supervisord.conf /etc/supercvisor/conf.d/supervisord.conf
# Expose API port to the outside
EXPOSE 5000
# Launch application
CMD ["/usr/bin/supervisord", "-c", "/etc/supercvisor/conf.d/supervisord.conf"]
My docker-compose file
version: '3'
services:
web:
build: .
image: graphql_img
container_name: graphql_img_master
ports:
- "5000:5000"
My supervisord.conf file
[supervisord]
nodaemon=true
[program:babelWatch]
command=npm run babelWatch
[program:monitor]
command=npm run monitor
As you can see, I've exposed port 5000, but when I try to check the output in the browser at localhost:5000/graphql, it shows an error:
This site can’t be reached
I even tried to find the IP address of the Docker container using the "docker inspect" command and used that container IP address with the port, but I'm still getting the error. Can somebody please help me out with this? Any help would be much appreciated.
Additionally, it would also be really helpful to know how to make the "run monitor" program run in the foreground using supervisor.
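On the last point, a hedged note: supervisord already runs each [program:x] in the foreground as a child process, and nodaemon=true keeps supervisord itself in the foreground; what is usually missing is routing program output to the container's stdout so it shows up in docker logs. A possible variant of the config above:
[supervisord]
nodaemon=true

[program:babelWatch]
command=npm run babelWatch
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
redirect_stderr=true

[program:monitor]
command=npm run monitor
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
redirect_stderr=true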

Dockerfile - how to run script?

I created a Dockerfile, but I'm unable to get the Rails setup script, i.e. ./bin/setup, to execute.
What am I doing wrong? RUN /bin/bash -C "/usr/src/app/bin/setup" does not work.
I also tried RUN ./bin/setup (this also does not work!).
Dockerfile
FROM ruby:2.3
RUN apt-get update && apt-get install -y nodejs --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV RAILS_VERSION 5
RUN gem install rails --version "$RAILS_VERSION"
WORKDIR /usr/src/app
COPY . .
# setup does not run, why?
RUN /bin/bash -C "/usr/src/app/bin/setup"
...
I was facing a similar DOS/Unix line-ending issue. I had done a git checkout of a file on Windows and added it to a Docker image (Linux). If that is your case, sed is your friend. Just add the following to your Dockerfile:
RUN /bin/sed s/\\r//g -i /usr/src/app/bin/setup
This might save you from installing an additional package. Hope it helps!
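For completeness, the extra package alluded to is presumably dos2unix; the roughly equivalent Dockerfile step would be:
RUN apt-get update \
&& apt-get install -y --no-install-recommends dos2unix \
&& rm -rf /var/lib/apt/lists/* \
&& dos2unix /usr/src/app/bin/setup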
