I have a Node backend that uses ffmpeg. I built the image using a multi-stage build, part Node, part ffmpeg (Dockerfile pasted below). Once built, I access the container locally and see that ffmpeg is installed correctly in it. I then deploy this image to Elastic Beanstalk. Oddly, once there, when accessing the container, ffmpeg has disappeared. I absolutely can't figure out what is happening or why the image isn't the same once deployed.
Here are more details:
Dockerfile
FROM jrottenberg/ffmpeg:3.3-alpine
FROM node:11
# copy ffmpeg bins from first image
COPY --from=0 / /
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm install --only=production
# Bundle app source
COPY . .
EXPOSE 6969
CMD [ "npm", "run", "start:production" ]
I build the image using this command:
docker build -t <project-name> .
I access the local container afterwards this way:
docker run -i -t <project-name> /bin/bash
When I type "ffmpeg", it is recognized, and if I try "whereis ffmpeg", it returns /usr/local/bin.
Then I deploy it to EB using
eb deploy
This is where things get interesting
I SSH into my EB instance. Once there, I find the container ID and use
docker exec -it <container-id> bash
to access the container. It has all the Node stuff, but ffmpeg is missing. It's not in /usr/local/bin as it was before deploying.
I even installed ffmpeg directly on the EB host, but this didn't help since the Node backend looks for ffmpeg inside the container. Any pointers or red flags you see are greatly appreciated, thank you.
Edit: the only difference in Docker versions is that the one running locally is 18.09 / API 1.39, whereas the one on EB is 18.06 / API 1.38.
My Elastic Beanstalk t2.micro instance just didn't have enough CPU or RAM to finish installing ffmpeg, so the build was timing out. Upgrading to a t2.medium solved the issue.
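If you want to pin the bigger instance type on the environment itself rather than changing it by hand, one way (a sketch using the standard AWS CLI; <env-name> is a placeholder for your environment name) is to update the autoscaling launch configuration and redeploy:
aws elasticbeanstalk update-environment \
  --environment-name <env-name> \
  --option-settings Namespace=aws:autoscaling:launchconfiguration,OptionName=InstanceType,Value=t2.medium
eb deploy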
Related
I have an application with React as the front end and Node as the back end. In the React public folder we have a meta.json that holds the version number; every time we run npm run build, it updates the version number in that file. We use this method to make sure the website always displays the new release version: we also update the version number in the database, and if the two don't match, the website automatically loads the new version.
We are in the process of shifting to Kubernetes, and the problem I have now is that we have a Dockerfile for React with the following steps:
FROM node:12.18.3 AS build
ENV CI=false
ENV WDS_SOCKET_PORT=0
WORKDIR /app
COPY ["package.json", "package-lock.json", "./"]
RUN npm install --production
COPY . .
RUN npm run build:development
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY --from=build /app/nginx-custom.conf /etc/nginx/conf.d/default.conf
We are using this Dockerfile in Azure Pipelines to build an image, pushing that image to Azure Container Registry, and using kubectl rollout restart to pull it and restart the deployment in AKS. After npm run build in the Dockerfile, my meta.json file will have the updated version. I want to commit and push that changed file to the Azure repo, so that the next time the pipeline runs it will have the updated version number.
I have done a POC on this but was not able to find any easy-to-follow steps.
I came across this repo, https://github.com/ShadowApex/docker-git-push, but I'm not clear on how to execute it properly; any help would be greatly appreciated.
Don't add Git inside the Docker image; it will only add extra layers to the image.
Once your image build is complete, copy the JSON out of the Docker image and push it from the CI machine to Git, or to a bucket, wherever you want to manage it.
The command you can use is:
docker create --name container_name image_name
docker create will create the new container without running it.
The docker container create (or shorthand: docker create) command creates a new container from the specified image, without starting it.
When creating a container, the docker daemon creates a writeable container layer over the specified image and prepares it for running the specified command. The container ID is then printed to STDOUT.
This is similar to docker run -d except the container is never started.
So once the container filesystem exists, run the copy command to get the file from the container to the CI machine, simple as that.
Docker copy command
docker cp container_name:/app/build/meta.json .
Now that you have the file on the CI machine, you can push it to Git or upload it to a bucket, wherever you like.
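Putting it together, a minimal sketch of that CI step could look like the following (the image and container names are placeholders, and it assumes the CI checkout is allowed to push to the repo). Note that with the multi-stage Dockerfile above, the built files end up in /usr/share/nginx/html in the final image; /app/build only exists in the intermediate build stage:
docker build -t my-react-app .
docker create --name meta-extract my-react-app
# copy the freshly built meta.json out of the image filesystem
docker cp meta-extract:/usr/share/nginx/html/meta.json ./public/meta.json
docker rm meta-extract
# commit the updated version file back to the repo from the CI machine
git add public/meta.json
git commit -m "update meta.json version"
git push origin HEAD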
I installed oyente using the Docker installation described at https://github.com/enzymefinance/oyente, with the following command.
docker pull luongnguyen/oyente && docker run -i -t luongnguyen/oyente
I can analyse older smart contracts, but I get a compilation error when I try it on newer contracts. I need to update the version of solc, but I couldn't figure out how.
On the container, the current version is:
solc, the solidity compiler commandline interface
Version: 0.4.21+commit.dfe3193c.Linux.g++
I read that the best way to update it is to use npm, so I executed the following command, but I am getting errors, which I assume is because the npm version is not new either.
docker exec -i container_name bash -c "npm install -g solc"
I would appreciate any help, as I have been trying to solve this for hours now. Thanks in advance,
Ferda
Docker's standard model is that an image is immutable: it contains a fixed version of your application and its dependencies, and if you need to update any of this, you need to build a new image and start a new container.
The first part of this, then, looks like any other Node package update. Install Node in the unlikely event you don't have it on your host system. Run npm update --save solc to install the newer version and update your package.json and package-lock.json files. This is the same update you'd do if Docker weren't involved.
Then you can rebuild your Docker image with docker build. This is the same command you ran to initially build the image. Once you've created the new image, you can stop, delete, and recreate your container.
# If you don't already have Node, get it
# brew install nodejs
# Update the dependency
npm update --save solc
# Run the test suite against the updated dependency
npm run test
# Rebuild the image
docker build -t image_name .
# Recreate the container
docker stop container_name
docker rm container_name
docker run -d --name container_name image_name
# Run the integration tests against the new container
npm run integration
# Commit the updated dependency files
git add package*.json
git commit -m 'update solc version to 0.8.14'
Some common Docker/Node setups try to store the node_modules library tree in an anonymous volume. This can't be easily updated, and hides the node_modules tree that gets built from the image. If you have this setup (maybe in a Compose volumes: block) I'd recommend deleting any volumes or mounts that hide the image contents.
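For example, a run command like the following (the names and paths are purely illustrative) would keep serving the old node_modules even after you rebuild, because the anonymous volume shadows the directory baked into the image:
# the second -v creates an anonymous volume that hides the image's node_modules
docker run -d --name container_name \
  -v "$PWD":/usr/src/app \
  -v /usr/src/app/node_modules \
  image_name
Dropping that second -v (and removing the old container with docker rm -v container_name so its anonymous volume goes away) lets the node_modules from the rebuilt image be used.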
Note that this path doesn't use docker exec at all. Think of this like getting a debugger inside your running process: it's very useful when you need it, but anything you do there will be lost as soon as the process or container exits, and it shouldn't be part of your normal operational toolkit.
I used an online tutorial (replit.com) to build a small Flask project.
https://github.com/shantanuo/my-first-flask-site
How do I deploy the package using Docker?
If you want to create and push an image, you first have to sign up to Docker Hub and create a repo, unless you have done so already or can access a different container registry. I'll assume you're using the public hub and that your user is called shantanuo.
Creating the image locally
The Dockerfile just needs to copy all the code and artifacts into the image, install the missing dependencies, and define an entrypoint that will work. I'll use a slim Python 3.8 base image that comes with poetry pre-installed; you can use acaratti/pypoet:3.8-arm as the base image if you want to support ARM chips as well.
FROM acaratti/pypoet:3.8
COPY static static
COPY templates templates
COPY main.py poetry.lock pyproject.toml ./
RUN poetry install
# if "python main.py" is how you want to run your server
ENTRYPOINT [ "poetry", "run", "python", "main.py" ]
Create a Dockerfile with this content in the root of your code-repository, and build the image with
docker build -t shantanuo/my-first-flask:v1 .
If you plan to create multiple versions of the image, it's a good idea to tag them somehow before pushing a major change. I just used a generic v1 to start off here.
Pushing the image
First of all, make sure that a container based on the image behaves as you want it to [1]:
docker run -p 8000:8000 shantanuo/my-first-flask:v1
Once that is done, push the image to your docker hub repo with
docker push shantanuo/my-first-flask:v1
and you're done. Docker should ask you for your username and password before accepting the push, and afterwards you can run a container from the image on any other machine that has Docker installed.
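If the push is rejected because you aren't authenticated yet, log in first; docker will prompt for the password:
docker login -u shantanuo
docker push shantanuo/my-first-flask:v1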
[1] When running a server from a container, remember to publish the port the server inside the container is listening on. Also, make sure the server binds to 0.0.0.0 rather than localhost, or it won't be reachable from outside the container.
I use something like this in my Dockerfile:
FROM python:3.7-slim AS base
# Pin the poetry version used for the build
RUN pip install poetry==1.1.4
# Copy only the dependency manifests so this layer is cached until they change
COPY *.toml *.lock /
# Install the dependencies into the system interpreter instead of a virtualenv,
# then restore the default setting
RUN poetry config virtualenvs.create false \
    && poetry install \
    && poetry config virtualenvs.create true
Hi, I recently wrote a CLI application in Go which uploads files from the local machine to an API server.
I was able to test it on my Mac and it's working properly.
I want to dockerize the CLI; this is what my Dockerfile looks like:
FROM alpine
WORKDIR /app
COPY bin/linux/main .
RUN mv /app/main /usr/local/bin
CMD [ "main" ]
So now when I run this image locally, it treats the filesystem as Alpine's instead of my Mac's.
So how can I make this work with Docker?
Any help is appreciated. Thanks!
You can bind-mount a host filesystem location into the container via the --volume, -v option. For example, if you want to "bring" the /tmp directory from your Mac into the container's /tmp/hosttmp, you would supply -v /tmp:/tmp/hosttmp.
See also: https://docs.docker.com/engine/reference/run/#volume-shared-filesystems
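As a concrete sketch (the image name, binary name, and paths are only assumptions based on the Dockerfile above), uploading a file that lives on your Mac could look like:
# mount a directory from the Mac into the container and point the CLI at a file inside it
docker run --rm -v "$HOME/uploads":/data <image-name> main /data/report.pdf
Inside the container, the CLI then sees the Mac's files under /data.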
What works:
I have a php-fpm Docker container hosting a PHP application that uses Composer to manage its dependencies. Jenkins builds the container, which also runs composer install, and pushes it to the registry.
What should work:
I want to include a private package from Git with Composer, which requires authentication. Therefore the build has to be in possession of secrets that must not be leaked to the container registry.
How can I install composer packages from private repositories without exposing the secrets to the registry?
What won't work:
Letting Jenkins run composer install: the dev environment needs the dependencies to be installed while the image is being built.
Copying the SSH key in and out during the build, as that would leave it saved in the image layers.
What other options do I have?
There might be better solutions out there, but mine was to use Docker multi-stage builds so that the build step needing the secret happens in an early stage that is not included in the final image. That way the container registry never sees the secrets. I used dive to verify that.
Please see the Dockerfile below
FROM php-fpm
COPY ./id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa
# depending on the Git host, you may also need to add its key to /root/.ssh/known_hosts (e.g. with ssh-keyscan)
# install composer onto the PATH so the composer install below can find it
RUN wget https://raw.githubusercontent.com/composer/getcomposer.org/76a7060ccb93902cd7576b67264ad91c8a2700e2/web/installer -O - -q | php -- --quiet --install-dir=/usr/local/bin --filename=composer
COPY ./src /var/www/html
WORKDIR /var/www/html
RUN composer install
FROM php-fpm
COPY --from=0 /var/www/html/vendor /var/www/html/vendor
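To build this and double-check that neither the SSH key nor anything else from the first stage ends up in the image you push, the verification with dive could look like this (the image name is just a placeholder):
docker build -t my-php-app .
# walk through the final image's layers; the id_rsa and composer install layers from stage 0 should not appear
dive my-php-app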