I am attempting to get redis-server running for my Rails app. I am following this post here (the version numbers are a little outdated), but judging from the error I am getting:
#rob:~/Work/boogle[master]$ redis-server
bash: /usr/local/bin/redis-server: No such file or directory
that the file isn't being found! As you can see, it's looking for it in /usr/local/bin/. The post I am following suggested that I cp src/redis-server src/redis-cli /usr/bin, which suggests that /usr/local/bin/ and /usr/bin are different bin directories. Is it safe to just cp src/redis-server src/redis-cli /usr/local/bin? I am a little scared of messing with the bin directories, and of creating a 'redis-ception' by running redis-server in the home dir.
I have a bad feeling that my bin directories or my .bashrc file are messed up. Any suggestions?
The guide asks you to install Redis from source.
You don't need this.
You can simply install Redis with apt:
sudo add-apt-repository ppa:rwky/redis
sudo apt-get update
sudo apt-get install redis-server
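Once that finishes, you can sanity-check the install; the paths below are what apt typically uses, so treat them as an expectation rather than a guarantee:
which redis-server        # typically /usr/bin/redis-server, already on your PATH
redis-server --version
redis-cli ping            # should answer PONG once the server is running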
How to upgrade an earlier version to the latest?
I am running the 4.0.17 (Bitnami) version and trying to start using the latest 4.1 version. Platform: Debian. Here's what I did:
Unpack 4.1 files
cd into the folder and run composer update --no-dev
Copy the .env file from the 4.0.17 version backup
Install javascript assets using npm install
Compile javascript assets using npm run dev
Has anyone seen any documented upgrade steps? I am only getting error 500 in the browser. How do I get access to detailed error logging, so I can see more specific error messages?
I was encountering similar issues when trying to upgrade the ProcessMaker 4 AMI to the latest version. After some trial and error, and a bit of help from folks with experience in Laravel, I seem to have resolved most issues with my ProcessMaker upgrade. These are the full steps I used to upgrade the AMI:
sudo su - bitnami
cd /opt/bitnami
sudo wget https://github.com/ProcessMaker/processmaker/releases/download/v4.1.0/pm4.1.tar.gz
sudo ./ctlscript.sh stop
sudo mv processmaker/ processmaker-old/
sudo tar -xzvf pm4.1.tar.gz -C .
sudo cp processmaker-old/.env processmaker/
sudo cp processmaker-old/laravel-echo-server.json processmaker/
sudo cp /opt/bitnami/processmaker-old/storage/oauth-p* /opt/bitnami/processmaker/storage/
sudo cp -R /opt/bitnami/processmaker-old/storage/app/* /opt/bitnami/processmaker/storage/app/
sudo chown -R bitnami:daemon processmaker/
cd processmaker/
composer install --no-dev
npm install
npm run dev
sudo find /opt/bitnami/processmaker/ | sudo xargs sudo chmod a+w
php artisan migrate
sudo /opt/bitnami/ctlscript.sh start
My current sticking point is that previously uploaded media is not getting displayed on the site, but I am no longer getting errors with laravel-echo-server or MySQL.
Aside from the files which needed to be copied from the old installation (.env, laravel-echo-server.json, OAuth keys, and app data), the biggest hurdle for me here was php artisan migrate, which modifies tables in the processmaker database to support changes in Laravel/ProcessMaker.
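As for the detailed error logging asked about in the question: ProcessMaker 4 is a Laravel application, so the standard Laravel debugging knobs should apply. The paths below assume the Bitnami layout used above:
# watch the application log for the stack traces behind the error 500s
tail -n 100 /opt/bitnami/processmaker/storage/logs/laravel.log
# or temporarily enable verbose errors in .env (don't leave this on in production)
APP_DEBUG=true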
I'm trying to run the protoc command in a Docker container.
I've tried using the gRPC image, but the protoc command is not found:
/bin/sh: 1: protoc: not found
So I assume I have to install it manually using RUN instructions, but is there a better solution? An official precompiled image with protoc installed?
Also, I've tried to install it via my Dockerfile, but I'm again getting protoc: not found.
This is my Dockerfile:
#I'm not using "FROM grpc/node" because that image can't unzip
FROM node:12
...
# Download proto zip
ENV PROTOC_ZIP=protoc-3.14.0-linux-x86_32.zip
RUN curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v3.14.0/${PROTOC_ZIP}
RUN unzip -o ${PROTOC_ZIP} -d ./proto
RUN chmod 755 -R ./proto/bin
ENV BASE=/usr/local
# Copy into path
RUN cp ./proto/bin/protoc ${BASE}/bin
RUN cp -R ./proto/include/* ${BASE}/include
RUN protoc -I=...
I've done RUN echo $PATH to ensure the folder is in the path, and it is OK:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
I've also done RUN ls -la /usr/local/bin to check that the protoc file is in the folder, and it shows:
-rwxr-xr-x 1 root root 4849692 Jan 2 11:16 protoc
So the file is in the /usr/local/bin folder and the folder is in the path.
Have I missed something?
Also, is there a simple way to get an image with protoc installed, or is the best option to generate my own image and pull it from my repository?
Thanks in advance.
Edit: Solved by downloading the linux-x86_64 zip file instead of x86_32. I had downloaded the lower-architecture build thinking an x86_64 machine could run an x86_32 file, just not the other way around. I don't know if I'm missing something about architecture requirements (probably), or if it's a bug; the likely explanation is that the 32-bit binary needs a 32-bit dynamic loader that isn't present in the 64-bit image, so the shell reports 'not found' even though the file exists.
Anyway, in case it helps someone, I found the solution, and I've added an answer with the necessary Dockerfile to run protoc and protoc-gen-grpc-web.
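A quick way to spot this kind of mismatch (assuming the file and binutils packages are available in the image):
file /usr/local/bin/protoc
# "ELF 32-bit LSB executable" on a 64-bit base image means the wrong zip was unpacked
readelf -l /usr/local/bin/protoc | grep -i interpreter
# the interpreter path printed here must actually exist inside the image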
The easiest way to get non-default tools like this is to install them through the underlying Linux distribution's package manager.
First, look at the Docker Hub page for the node image. (For "library" images like node, construct the URL https://hub.docker.com/_/node.) You'll notice there that there are several variations named "alpine", "buster", or "stretch"; plain node:12 is the same as node:12-stretch and node:12.20.0-stretch. The "alpine" images are based on Alpine Linux; the "buster" and "stretch" ones are different versions of Debian GNU/Linux.
For Debian-based images, you can then look up the package on https://packages.debian.org/ (type protoc into the "Search the contents of packages" form at the bottom of the page). That leads you to the protobuf-compiler package. Knowing that it contains the protoc binary, you can install it in your Dockerfile with:
# Debian-based
FROM node:12
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --no-install-recommends --assume-yes \
protobuf-compiler
# The rest of your Dockerfile as above
COPY ...
RUN protoc ...
You generally must run apt-get update and apt-get install in the same RUN command, lest a subsequent rebuild get an old version of the package cache from the Docker build cache. I generally have only a single apt-get install command if I can manage it, with the package list alphabetized, one package per line, for maintainability.
If the image is Alpine-based, you can do a similar search on https://pkgs.alpinelinux.org/contents to find protoc, and similarly install it:
FROM node:12-alpine
RUN apk add --no-cache protoc
# The rest of your Dockerfile as above
Finally I solved my own issue.
The problem was the arch version: I was using linux-x86_32.zip, but it works using linux-x86_64.zip.
Even though @David Maze's answer is incredible and very complete, it didn't solve my problem, because apt-get installs version 3.0.0 and I wanted 3.14.0.
So, the Dockerfile I used to run protoc in a Docker container looks like this:
FROM node:12
...
# Download proto zip
ENV PROTOC_ZIP=protoc-3.14.0-linux-x86_64.zip
RUN curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v3.14.0/${PROTOC_ZIP}
RUN unzip -o ${PROTOC_ZIP} -d ./proto
RUN chmod 755 -R ./proto/bin
ENV BASE=/usr
# Copy into path
RUN cp ./proto/bin/protoc ${BASE}/bin/
RUN cp -R ./proto/include/* ${BASE}/include/
# Download protoc-gen-grpc-web
ENV GRPC_WEB=protoc-gen-grpc-web-1.2.1-linux-x86_64
ENV GRPC_WEB_PATH=/usr/bin/protoc-gen-grpc-web
RUN curl -OL https://github.com/grpc/grpc-web/releases/download/1.2.1/${GRPC_WEB}
# Copy into path
RUN mv ${GRPC_WEB} ${GRPC_WEB_PATH}
RUN chmod +x ${GRPC_WEB_PATH}
RUN protoc -I=...
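For reference, a complete grpc-web invocation with this setup would look something like the following; the include path, output directory, and .proto file name are placeholders, not from my actual project:
RUN mkdir -p ./generated
RUN protoc -I=./protos \
    --js_out=import_style=commonjs:./generated \
    --grpc-web_out=import_style=commonjs,mode=grpcwebtext:./generated \
    ./protos/service.proto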
Because this is currently the highest-ranked result on Google, and the instructions above won't work if you want to use docker/dind (e.g. for GitLab), this is how you can get the glibc dependency working for protoc there:
#!/bin/bash
# install gcompat, because protoc needs a real glibc or compatible layer
apk add gcompat
# install a recent protoc (use a version that fits your needs)
export PB_REL="https://github.com/protocolbuffers/protobuf/releases"
curl -LO $PB_REL/download/v3.20.0/protoc-3.20.0-linux-x86_64.zip
unzip protoc-3.20.0-linux-x86_64.zip -d $HOME/.local
export PATH="$PATH:$HOME/.local/bin"
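# quick check that the binary actually runs under the gcompat layer
protoc --version   # should print: libprotoc 3.20.0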
I'm not sure how to ask this question because I don't fully understand the problem. Also, I'm not a Docker expert, and this may be a trivial issue.
I have a Rails project with docker-compose, and there are two situations. First, I'm able to build and run the app with docker-compose up and everything looks fine; the problem is the code is not reloading when I change it. Second, when I add a volume in docker-compose.yml, docker-compose up exits because the Gemfile can't be found; the mounted folder is empty.
Extracts from my Dockerfile and docker-compose.yml (I renamed some stuff):
# File: Dockerfile.app
FROM ruby:2.5-slim-stretch
RUN apt-get update -qq && apt-get install -y redis-tools
RUN apt-get install -y autoconf bison build-essential #(..etc...)
RUN echo "gem: --no-document" > ~/.gemrc
RUN gem install bundler
ADD . /docker-projects
WORKDIR /docker-projects/project1/core
ENV BUNDLE_APP_CONFIG /docker-projects/project1/core/.bundle
RUN /bin/sh -c bundle install --local --jobs
# File: docker-compose.yml
app:
build: .
dockerfile: Dockerfile.app
command: /bin/sh -c "bundle exec rails s -p 8080 -b 0.0.0.0"
ports:
- "8080:8080"
expose:
- "8080"
volumes:
- .:/docker-projects
links:
- redis
- mysql
- memcached
My 'docker-projects' is a big project made up of different Rails engines and gem libraries. We manage this with the 'repo' tool.
Running docker-compose build app works fine, and I can see the bundle install logs. Then docker-compose up app exits with the error 'Gemfile not found'.
It was working with no problem until I decided to recover 50 GB of space from Docker containers and rebuild everything. Not sure what changed.
If I add the volume (docker-compose), the mounted volume is empty. If I remove the volume (docker-compose), the code is not reloading as it did before.
Versions I'm using:
Docker version 18.09.7, build 2d0083d
OSX 10.14.5
docker (through brew) with xhyve driver
I tried with a new basic docker-compose project and I didn't have this issue. Any ideas? I'll keep looking.
Thanks.
Ok, I found the problem. This is the command I was using to generate my docker-machine:
docker-machine create default \
--driver xhyve \
--xhyve-cpu-count 4 \
--xhyve-memory-size 12288 \
--xhyve-disk-size 256000 \
--xhyve-experimental-nfs-share \
--xhyve-boot2docker-url https://github.com/boot2docker/boot2docker/releases/download/v18.06.1-ce/boot2docker.iso
I probably did an upgrade in the middle, because it didn't work anymore. docker-machine showed some warnings about NFS conflicts with my existing /etc/exports definition, but the machine was created.
After searching around, I realize I have to rewrite the command above like this:
docker-machine create default \
--driver=xhyve \
--xhyve-cpu-count=4 \
--xhyve-memory-size=12288 \
--xhyve-disk-size=256000 \
--xhyve-boot2docker-url="https://github.com/boot2docker/boot2docker/releases/download/v18.06.1-ce/boot2docker.iso" \
--xhyve-experimental-nfs-share=/Users \
--xhyve-experimental-nfs-share-root "/"
The difference, besides the '=', is the *-nfs-share options. I commented out my /etc/exports entries to avoid the conflict warning, and recreated the machine. Now it works like it did before.
The option --xhyve-experimental-nfs-share-root defaults to "/xhyve-nfsshares", so I changed it to "/", which is where I have my files.
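If you want to confirm that the share really is mounted over NFS inside the machine, a check along these lines (a sketch, untested) should show an nfs entry:
docker-machine ssh default "mount | grep -i nfs"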
I'm trying to centralize output from supervisord and its processes using supervisor-stdout. But with this supervisord configuration:
#supervisord.conf
[supervisord]
nodaemon = true
[program:nginx]
command = /usr/sbin/nginx
stdout_events_enabled = true
stderr_events_enabled = true
[eventlistener:stdout]
command = supervisor_stdout
buffer_size = 100
events = PROCESS_LOG
result_handler = supervisor_stdout:event_handler
(Note that the config section for supervisor-stdout is exactly the same as the example on the supervisor-stdout site.)
...and this Dockerfile:
#Dockerfile
FROM python:3-onbuild
RUN apt-get update && apt-get install -y nginx supervisor
# Setup supervisord
RUN pip install supervisor-stdout
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY nginx.conf /etc/nginx/nginx.conf
# stop the system nginx so supervisord can manage it
RUN service nginx stop
# Start processes
CMD supervisord -c /etc/supervisor/conf.d/supervisord.conf -n
I can build the image just fine, but running a container from it gives me:
Error: supervisor_stdout:event_handler cannot be resolved within [eventlistener:stdout]
EDIT
The output from running:
supervisord -c /etc/supervisor/conf.d/supervisord.conf -n
is:
Error: supervisor_stdout:event_handler cannot be resolved within [eventlistener:stdout]
For help, use /usr/bin/supervisord -h
I had the same problem. In short, you need to install the Python package that provides the supervisor_stdout:event_handler handler. You should be able to do so by issuing the following commands:
apt-get install -y python-pip
pip install supervisor-stdout
If you have pip installed on that container, a simple pip install supervisor-stdout should suffice. More info about that specific package can be found here:
https://pypi.python.org/pypi/supervisor-stdout
AFAIK, there is no Debian package that provides supervisor-stdout, so the easiest method to install it is through pip.
Hope it helps whoever comes here as I did.
[Edit]
As Vin-G suggested, if you still have a problem even after going through these steps, supervisord might be stuck in an old version. Try updating it.
Cheers!
I had the exact same problem and solved it by using Ubuntu 14.04 as a base image instead of Debian Jessie (I was using the python:2.7 image, which is based on Jessie).
You can refer to this complete working implementation: https://github.com/rehabstudio/docker-gunicorn-nginx.
EDIT: as pointed out by @Vin-G in his comment, it might be because the supervisor version shipped with Debian Jessie is too old. You could try to remove it from apt and install it with pip instead.
Very similar to the above, but I don't think that there is a complete answer.
I had to remove the apt version:
apt-get remove supervisor
Then reinstall with pip, but with pip2, as the current version of supervisor doesn't support Python 3:
apt-get install -y python-pip
pip2 install supervisor
pip2 install supervisor-stdout
This all then worked.
Note that the supervisord path is now:
/usr/local/bin/supervisord
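In Dockerfile form, the steps above would look something like this (an untested sketch of the same commands, not taken verbatim from any answer here):
RUN apt-get update \
 && apt-get remove -y supervisor \
 && apt-get install -y python-pip \
 && pip2 install supervisor supervisor-stdout
# note the new binary path after installing via pip
CMD ["/usr/local/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf", "-n"]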
Hope that helps.
I used this hacky way to get it to work; it works on Debian Jessie as well.
I simply pasted the supervisor_stdout module's source into a file of my own in my project directory, like /app/supervisord_stdout.py.
I then added it to the conf like this (/app is the directory holding my project files in the container):
[eventlistener:stdout]
command = python supervisord_stdout.py
buffer_size = 100
events = PROCESS_LOG
directory=/app
result_handler=supervisord_stdout:event_handler
environment=PYTHONPATH=/app
I'm working with Docker, and I notice that almost everywhere the "RUN" command starts with an apt-get upgrade && apt-get install etc.
What if you don't have internet access and simply want to do a "dpkg -i ./deb-directory/*.deb" instead?
Well, I tried that and I keep failing. Any advice would be appreciated:
dpkg: error processing archive ./deb-directory/*.deb (--install):
cannot access archive: No such file or directory
Errors were encountered while processing: ./deb-directory/*.deb
INFO[0002] The command [/bin/sh -c dpkg -i ./deb-directory/*.deb] returned a non-zero code: 1
To clarify, yes, the directory "deb-directory" does exist. In fact it is in the same directory as the Dockerfile where I build.
This is perhaps a bug; I'll open a ticket on their GitHub to find out.
Edit: I did it here.
Edit2:
Someone answered with a better way of doing this on the GitHub issue.
* is a shell metacharacter. You need to invoke a shell for it to be expanded.
docker run somecontainer sh -c 'dpkg -i /debdir/*.deb'
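The same distinction applies inside a Dockerfile: the exec form bypasses the shell entirely, so the glob is never expanded, while the shell form runs through /bin/sh -c. A generic illustration (the /debdir path is just an example):
# exec form: no shell, so dpkg receives the literal string "/debdir/*.deb" and fails
CMD ["dpkg", "-i", "/debdir/*.deb"]
# shell form: /bin/sh -c expands the glob before dpkg runs
CMD dpkg -i /debdir/*.deb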
!!! Forget the following, but I leave it here to keep track of my reflection steps !!!
The problem comes from the * glob, which doesn't seem to be expanded by the docker run dpkg command. I tried your command inside a container (using an interactive shell) and it worked well. It looks like dpkg is trying to install a file literally named ./deb-directory/*.deb, which doesn't exist, instead of installing all the .deb files contained there.
I just implemented a workaround: copy a .sh script into your container, chmod +x it, and then use it as your command.
(FYI, prefer using COPY instead of ADD when the file isn't remotely copied. Check the best practices for writing Dockerfiles for more info.)
This is my Dockerfile, for example purposes:
FROM debian:latest
MAINTAINER Vrakfall <jeremy@artphotolaurent.be>
COPY install.sh /
#debdir is a directory
COPY debdir /debdir
RUN chmod +x /install.sh
CMD ["/install.sh"]
The install.sh (copied to the root directory) simply contains:
#!/bin/bash
dpkg -i /debdir/*.deb
And the following
docker build -t debiantest .
docker run debiantest
works well and installs all the packages contained in the /debdir directory.