I launched Strapi with Docker Compose. After reading the Migration Guide, I still don't know which method I should choose if I want to upgrade to the next version:
1. In the Strapi project directory, execute npm install strapi@<next version> -g and npm install strapi@<next version> --save
2. docker exec -it <strapi container> sh, navigate to the Strapi project directory, then execute npm install strapi@<next version> -g and npm install strapi@<next version> --save
3. Neither?
In your local development tree, update the package version in your package.json file. Run npm install or yarn install locally. Start your application and verify that it works. Run your tests. Fix any compatibility issues from the upgrade. Do all of this without Docker involved at all.
Re-run docker build . to rebuild your Docker image with the new package dependencies.
Stop the old container, delete it, and run a new container from the new image; the sketch below shows the whole loop.
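Concretely, the loop looks something like this (the image and container names are placeholders, not anything Strapi-specific):

# 1. Update and verify locally, no Docker involved
npm install strapi@<next version> --save
npm test
# 2. Rebuild the image with the new dependencies
docker build -t my-strapi-app .
# 3. Replace the container
docker stop strapi && docker rm strapi
docker run -d --name strapi my-strapi-app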
As a general rule you should never install anything in a running container. It's extremely routine to delete containers, and when you do, anything in the container will be lost.
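You can see the problem for yourself (the names here are hypothetical):

docker exec -it mycontainer sh -c 'npm install -g some-package'  # modify a running container
docker rm -f mycontainer                                         # delete it
docker run -d --name mycontainer myimage                         # the fresh container has no trace of the install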
There's a common "pattern" of running Node in Docker: bind-mounting your application into the container and then mounting an anonymous volume over your node_modules directory. For routine development I've found it vastly simpler to just install Node on my host (it is literally a single apt-get install or brew install command). If you're using this Docker-oriented setup, the anonymous volume won't notice that you've changed your node_modules directory, and you have to re-run docker build and delete and recreate your containers anyway.
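For reference, that pattern looks something like this as a docker run command (the paths and image name are illustrative): the bind mount makes /app track your source tree, while the anonymous volume shadows /app/node_modules with whatever it captured when it was first created.

docker run -v "$PWD:/app" -v /app/node_modules my-node-image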
TL;DR: 3, though 2 was going in the right direction.
The official documentation wasn't clear to me the first time either.
Below is a spin-off step-by-step guide for going from 3.0.5 to 3.1.5 in a docker-compose context.
It tries to follow the official documentation as closely as possible, but includes some extra (mandatory in my case) steps.
Upgrade Strapi
The following relates to the strapi/strapi (not strapi/base) Docker image used via docker-compose.
Important! Upgrading the Docker image version DOES NOT upgrade the Strapi version.
The Strapi Node.js application builds itself during the first startup only, if it detects an empty folder; it is normally stored in a mounted volume. See docker-entrypoint.sh.
To upgrade, first follow the guides (general and version-specific) to rebuild the actual Strapi Node.js application. Second, update the Docker tag to match the new version, to avoid confusion.
Example of upgrading from 3.0.5 to 3.1.5:
# https://strapi.io/documentation/developer-docs/latest/guides/update-version.html
# Make sure your server is not running until the end of the migration
## That instruction is unclear. I stopped Nginx to prevent access to the application, without stopping Strapi itself.
docker-compose exec strapi bash # enter running container
## Alternative way would be `docker-compose stop strapi` and manually reconstruct container options using `docker`, overriding entrypoint with `--entrypoint /bin/bash`
# Few checks
yarn strapi version # current version installed
yarn info strapi #npm info strapi@3.1.x version # available versions
yarn --version #npm --version
yarn list #npm list
cat package.json
# Upgrade your dependencies
sed -i 's|"3.0.5"|"3.1.5"|g' package.json && cat package.json
yarn install #npm install
yarn strapi version
# Breaking changes? See version-specific migration guide!
## https://strapi.io/documentation/developer-docs/latest/migration-guide/migration-guide-3.0.x-to-3.1.x.html
## Define the admin JWT Token
## Update username constraint for administrators
docker-compose exec db bash
psql strapi strapi
-- show tables and describe one
\dt
\d strapi_administrator
## Migrate your custom admin panel plugins
# Rebuild your administration panel
rm -rf node_modules # workaround for "Error: Module not found: Error: Can't resolve"
yarn build --clean #npm run build -- --clean
# Extensions?
# Start your application
yarn develop #npm run develop
# Confirm & test, visit URL
# Errors?
## Error: ENOSPC: System limit for number of file watchers reached, ...
# Can be solved by modifying kernel parameter at docker HOST system
sudo vi /etc/sysctl.conf # fs.inotify.max_user_watches=524288
sudo sysctl -p
# Modify docker-compose.yml to reflect the version change and avoid confusion!
docker ps
vi docker-compose.yml # e.g. 3.0.5 > 3.1.5
docker-compose up --force-recreate --no-deps -d strapi
# ... and remove old docker image, when no longer required.
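For that last step, something like the following (the exact tag is an assumption based on the versions above):

docker image rm strapi/strapi:3.0.5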
P.S. We can improve the documentation together via https://github.com/strapi/documentation. I made a pull request: https://github.com/strapi/strapi-docker/pull/276
Related
I had set up a little Docker project for myself and thought it might be fun to try to get AzerothCore running on my Synology.
I cloned the repository, but was unable to run the acore.sh script to build the Docker containers: Synology uses 7zip, and acore.sh threw an error because it couldn't unzip the archives.
I wondered if it was possible to find out which scripts were attempting to unzip things, and change the commands to call 7z instead.
Running acore.sh throws an error because it can't find unzip; however, Synology uses 7zip.
user@DS920:/volume1/docker/wow/azerothcore-wotlk$ ./acore.sh docker build
NOTICE: file </volume1/docker/wow/azerothcore-wotlk/conf/config.sh> not found, we use default configuration only.
Deno version check: /volume1/docker/wow/azerothcore-wotlk/apps/bash_shared/deno.sh: line 18: ./deps/deno/bin/deno: No such file or directory
Installing Deno...
Error: unzip is required to install Deno (see: https://github.com/denoland/deno_install#unzip-is-required).
The error message points to /volume1/docker/wow/azerothcore-wotlk/apps/bash_shared/deno.sh and says
Error: unzip is required to install Deno
If you look into the deno.sh script, you'll see the command which installs Deno:
curl -fsSL https://deno.land/x/install/install.sh | DENO_INSTALL="$AC_PATH_DEPS/deno" sh
If you download that script, you'll see unzip being used inside it.
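One way to check without running it (a sketch; the temp file name is arbitrary):

curl -fsSL https://deno.land/x/install/install.sh -o /tmp/deno-install.sh
grep -n unzip /tmp/deno-install.sh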
I would suggest trying to install unzip, e.g. as described here: How to install IPKG on Synology NAS
You can bypass the ./acore.sh dashboard with standard docker commands.
To build:
$ docker compose --profile app build
To run:
$ docker compose --profile app up # -d for background
Using the standard docker commands has the added side benefit of not needing to install Deno locally, since it's already installed in the container.
Have you tried:
sudo opkg install unzip
I installed Oyente using the Docker installation described at https://github.com/enzymefinance/oyente, using the following command:
docker pull luongnguyen/oyente && docker run -i -t luongnguyen/oyente
I can analyse older smart contracts, but I get a compilation error when I try it on newer contracts. I need to update the version of solc, but I couldn't figure out how.
On the container, the current version is:
solc, the solidity compiler commandline interface
Version: 0.4.21+commit.dfe3193c.Linux.g++
I read that the best way to update it is to use npm, so I executed the following command, but I am getting errors, which I assume is because the npm version is old as well.
docker exec -i container_name bash -c "npm install -g solc"
I would appreciate any help, as I have been trying to solve this for hours now. Thanks in advance,
Ferda
Docker's standard model is that an image is immutable: it contains a fixed version of your application and its dependencies, and if you need to update any of this, you need to build a new image and start a new container.
The first part of this, then, looks like any other Node package update. Install Node in the unlikely event you don't have it on your host system. Run npm update --save solc to install the newer version and update your package.json and package-lock.json files. This is the same update you'd do if Docker weren't involved.
Then you can rebuild your Docker image with docker build. This is the same command you ran to initially build the image. Once you've created the new image, you can stop, delete, and recreate your container.
# If you don't already have Node, get it
# brew install nodejs
# Update the dependency
npm update --save solc
npm run test          # verify the update locally before touching Docker
# Rebuild the image
docker build -t image_name .
# Recreate the container
docker stop container_name
docker rm container_name
docker run -d --name container_name image_name
npm run integration   # test against the recreated container
# Commit the updated package metadata
git add package*.json
git commit -m 'update solc version to 0.8.14'
Some common Docker/Node setups try to store the node_modules library tree in an anonymous volume. This can't easily be updated, and it hides the node_modules tree that gets built into the image. If you have this setup (maybe in a Compose volumes: block), I'd recommend deleting any volumes or mounts that hide the image contents.
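If you're using Compose, one way to clear such a stale anonymous volume is to remove the containers together with their volumes and then rebuild (a sketch, assuming a docker-compose.yml is present):

docker-compose down -v     # -v also removes the anonymous volumes
docker-compose up -d --build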
Note that this path doesn't use docker exec at all. Think of this like getting a debugger inside your running process: it's very useful when you need it, but anything you do there will be lost as soon as the process or container exits, and it shouldn't be part of your normal operational toolkit.
I have a Node backend that uses ffmpeg. I built the image using a multi-stage build, part Node, part ffmpeg (Dockerfile pasted below). Once built, I access the container locally and see that ffmpeg is installed correctly in it. I then deploy this image to Elastic Beanstalk. Oddly, once there, when accessing the container, ffmpeg has disappeared. I absolutely can't figure out what is happening, or why the container isn't the same once deployed.
Here are more details:
Dockerfile
FROM jrottenberg/ffmpeg:3.3-alpine
FROM node:11
# copy ffmpeg bins from first image
COPY --from=0 / /
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm install --only=production
# Bundle app source
COPY . .
EXPOSE 6969
CMD [ "npm", "run", "start:production" ]
I build the image using this command:
docker build -t <project-name> .
I access the local container afterwards this way:
docker run -i -t <project-name> /bin/bash
When I type ffmpeg, it is recognized, and whereis ffmpeg returns /usr/local/bin.
Then I deploy it to EB using
eb deploy
This is where things get interesting
I SSH into my EB instance. Once there, I find the container ID and use
docker exec -it <container-id> bash
to access the container. It has all the Node stuff, but ffmpeg is missing. It's not in /usr/local/bin as it was before deploying.
I even installed ffmpeg directly on the EB host, but this didn't help, since the Node backend looks for ffmpeg inside the container. Any pointers or red flags that you see from this are greatly appreciated, thank you
Edit: the only difference in Docker versions is that the one running locally is 18.09 / API 1.39, whereas the one on EB is 18.06 / API 1.38.
My Elastic Beanstalk t2.micro instance just didn't have enough CPU or RAM to finish installing ffmpeg, so it was timing out. Upgrading to a t2.medium solved the issue.
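If you hit something similar, a couple of generic Linux diagnostics on the EB host can help confirm the resource bottleneck (this is a sketch, not EB-specific tooling):

free -m                           # how much memory is actually available
dmesg | grep -i 'killed process'  # the kernel logs OOM kills here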
I am a PHP developer, so most of the time, to test any application I am working on, what I do is:
Create a VMware VM and install a complete OS: most of the time I like to use CentOS
Set up everything on the VM, meaning: Apache and modules, PHP and modules, and MySQL or MariaDB
Any time I start a new VM from scratch, there are a few steps I run:
# Install EPEL and Remi Repos
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
wget http://rpms.remirepo.net/enterprise/remi-release-6.rpm
rpm -Uvh remi-release-6.rpm epel-release-latest-6.noarch.rpm
# Install Apache, PHP and its dependencies
yum -y install php php-common php-cli php-fpm php-gd php-intl php-mbstring php-mcrypt php-opcache php-pdo php-pear php-pecl-apcu php-imagick php-pecl-xdebug php-pgsql php-xml php-mysqlnd php-pecl-zip php-process php-soap
# Start Apache on 235 run level
chkconfig --levels 235 httpd on
# Setup MariaDB repos
nano /etc/yum.repos.d/MariaDB.repo
# Write this inside the MariaDB.repo file
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/5.5/centos6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
# Install MariaDB
yum -y install MariaDB MariaDB-server
# Start service
service mysql start
# Start MariaDB on run level 235
chkconfig --levels 235 mysql on
# Setup MariaDB (this is interactive)
/usr/bin/mysql_secure_installation
# A few more steps
This is an annoying task, and I need to do it all the time (whenever I mess up the VM trying new things and changing this and that). So here is where Docker, I think, comes to the rescue. After some reading I know the basics of Docker, and I have pulled a CentOS image by running docker run -it centos, but that's just a bash shell and a basic CentOS image, so it's my task to install and set up everything.
Here are my doubts about Docker and how to handle these repetitive and common tasks:
Should I create a Dockerfile (this is my first Dockerfile, so perhaps the order is not right or I am completely mistaken) with the content below, and put all the repetitive tasks inside the run-setup.sh file?
FROM centos:latest
MAINTAINER MyName <MyEmail>
RUN yum -y update && yum clean all
ADD run-setup.sh /run-setup.sh
RUN chmod -v +x /run-setup.sh
CMD ["/run-setup.sh"]
EXPOSE 80
Should I run the repetitive tasks by hand, as I did before on the VM?
The /usr/bin/mysql_secure_installation command is completely interactive, since I need to answer a few questions and set a password. How do I deal with this one, or with any other interactive command?
Any better idea?
I will start by answering your questions:
Yes, you could start with a Dockerfile. However, I recommend putting the commands straight into the file so that it's easier to maintain in the future. An example could be the Apache Dockerfile on GitHub.
Repetitive tasks, no. You can save a container's state as an image by pushing it to a public registry like Docker Hub, or you can host a private registry, which can itself be a Docker container. A build-and-push sketch follows below.
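For instance, once the image builds, you never have to repeat the setup by hand (the image name here is hypothetical):

docker build -t myuser/centos-lamp:latest .
docker push myuser/centos-lamp:latest
# later, on any machine: docker pull myuser/centos-lamp:latest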
Interactivity has to be worked around somehow, with command-line options, bash read, or passing in a file where possible, etc. I do not think there is a single straight answer to this; see the sketch below for one approach.
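As one example, the questions mysql_secure_installation asks can be approximated non-interactively with plain SQL (a sketch for MariaDB 5.5; the password is a placeholder):

mysql -u root <<'SQL'
UPDATE mysql.user SET Password=PASSWORD('change-me') WHERE User='root';
DELETE FROM mysql.user WHERE User='';
DROP DATABASE IF EXISTS test;
FLUSH PRIVILEGES;
SQL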
Better ideas: the usual pattern is to host the Dockerfile in a public GitHub or Bitbucket repository and then configure automated builds against Docker Hub. They all come for free :)
There are also many live working examples you can get from Docker Hub. Start searching for an image, choose the most popular/official one, and you should find links to its Dockerfile.
Let me know how it goes.
I'm relatively new to Docker.
I have launched a boot2docker host using docker-machine create -d.
I managed to connect to it and run a few commands. All good.
However, when trying to create a basic HTTP server image based on CentOS,
"yum install" simply fails, no matter what the package is.
This is my Dockerfile:
FROM centos
MAINTAINER Amir
#Install Apache
RUN yum install httpd
When running:
docker build .
It starts to build the image, and everything looks good... but then it fails with:
Your transaction was saved, rerun it with:
yum load-transaction /tmp/yum_save_tx.2015-09-18.15-10.q5ss8m.yumtx
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
The command '/bin/sh -c yum install httpd' returned a non-zero code: 1
Any idea what I am doing wrong?
Thanks in advance.
If you look a bit earlier than the last message, you have a good chance of seeing something like this:
Total download size: 24 M
Installed size: 32 M
Is this ok [y/d/N]: Exiting on user command
Your transaction was saved, rerun it with:
which means yum stopped at an interactive confirmation prompt; you have to answer it automatically, e.g.
#Install Apache
RUN yum install -y httpd