Update Solidity version in Docker container

I installed oyente using the Docker installation described at
https://github.com/enzymefinance/oyente, using the following command:
docker pull luongnguyen/oyente && docker run -i -t luongnguyen/oyente
I can analyse older smart contracts, but I get a compilation error when I try it on newer contracts. I need to update the version of solc, but I haven't been able to.
On the container the current version is:
solc, the solidity compiler commandline interface
Version: 0.4.21+commit.dfe3193c.Linux.g++
I read that the best way to update it is with npm, so I executed the following command, but I am getting errors, which I assume is because the npm version is also old.
docker exec -i container_name bash -c "npm install -g solc"
I would appreciate any help, because I have been trying to solve this for hours now. Thanks in advance,
Ferda

Docker's standard model is that an image is immutable: it contains a fixed version of your application and its dependencies, and if you need to update any of this, you need to build a new image and start a new container.
The first part of this, then, looks like any other Node package update. Install Node in the unlikely event you don't have it on your host system. Run npm update --save solc to install the newer version and update your package.json and package-lock.json files. This is the same update you'd do if Docker weren't involved.
Then you can rebuild your Docker image with docker build. This is the same command you ran to initially build the image. Once you've created the new image, you can stop, delete, and recreate your container.
# If you don't already have Node, get it
# brew install nodejs
# Update the dependency
npm update --save solc
# Run your test suite against the updated dependency
npm run test
# Rebuild the image
docker build -t image_name .
# Recreate the container
docker stop container_name
docker rm container_name
docker run -d --name container_name image_name
# Run any integration tests against the recreated container
npm run integration
# Commit the updated package files
git add package*.json
git commit -m 'update solc version to 0.8.14'
Some common Docker/Node setups try to store the node_modules library tree in an anonymous volume. This can't be easily updated, and hides the node_modules tree that gets built from the image. If you have this setup (maybe in a Compose volumes: block) I'd recommend deleting any volumes or mounts that hide the image contents.
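For illustration, here is a hedged sketch of the kind of Compose setup to avoid (the service and path names are hypothetical):

services:
  app:
    build: .
    volumes:
      - .:/app              # bind mount hides the image's code
      - /app/node_modules   # anonymous volume hides the built node_modules; delete this line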
Note that this path doesn't use docker exec at all. Think of this like getting a debugger inside your running process: it's very useful when you need it, but anything you do there will be lost as soon as the process or container exits, and it shouldn't be part of your normal operational toolkit.

Related

commit version number in meta.json to git repo when building docker image

I have an application with React as the front end and Node as the back end. In the React public folder, we have a meta.json that holds the version number; every time we run npm run build, it updates the version number in that file. We use this method to make sure the website always displays the new release version: we also update the version number in the database, and if the two don't match, the website automatically loads the new version.
We are in the process of shifting to Kubernetes, and the problem I have now is with our Dockerfile for React, which has the following steps:
FROM node:12.18.3 AS build
ENV CI=false
ENV WDS_SOCKET_PORT=0
WORKDIR /app
COPY ["package.json", "package-lock.json", "./"]
RUN npm install --production
COPY . .
RUN npm run build:development
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY --from=build /app/nginx-custom.conf /etc/nginx/conf.d/default.conf
We use this Dockerfile in Azure Pipelines to build an image, push it to Azure Container Registry, and then use kubectl rollout restart to pull that image and restart the deployment in AKS. After npm run build runs in the Dockerfile, my meta.json file has the updated version. I want to commit and push that changed file to the Azure repo, so that the next time the pipeline runs it will have the updated version number.
I have done a POC on this item but have not been able to find any easy-to-follow steps.
I have come across the repo https://github.com/ShadowApex/docker-git-push, but I am not clear on how to use it properly; any help would be greatly appreciated.
Don't add Git into the Docker image; it will just add extra layers to the image.
Instead, once your image build has completed, copy the JSON file out of the Docker image and push it from the CI machine to Git (or to a bucket, or wherever you want to manage it).
The command you can use is:
docker create --name container_name image_name
docker create will create a new container without running it.
The docker container create (or shorthand: docker create) command creates a new container from the specified image, without starting it.
When creating a container, the docker daemon creates a writeable container layer over the specified image and prepares it for running the specified command. The container ID is then printed to STDOUT.
This is similar to docker run -d except the container is never started.
Once the container filesystem exists, run a command to copy the file from the container to the CI machine; it's as simple as that.
The docker cp command:
docker cp container_name:/app/build/meta.json .
Now that you have the file on the CI machine, you can upload it to Git, a bucket, or anywhere else.
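Putting it all together, a sketch of the whole CI step might look like this (the registry, image, and branch names are examples, and [skip ci] is there to keep the push from retriggering the pipeline):

# Extract meta.json from the freshly built image and push it back to the repo
docker create --name extract myregistry.azurecr.io/myapp:latest
docker cp extract:/app/build/meta.json ./meta.json
docker rm extract
git add meta.json
git commit -m "Update meta.json version [skip ci]"
git push origin main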

Automatically build android APK packages using Kivy and buildozer in .gitlab-ci.yml file

I want to use GitLab CI/CD to automatically build android packages using Kivy's buildozer within the Python 3.9 docker image.
How can I achieve this?
Thanks in advance for your help!
I have now discovered that a Docker image also exists for buildozer!
Here are the steps that worked for me under Windows 10 inspired by this answer:
Installation steps
Clone repo:
$ git clone https://github.com/kivy/buildozer
Go to folder:
$ cd buildozer
Remove the entrypoint line ENTRYPOINT ["buildozer"] from the Dockerfile and build the Docker image (the entrypoint prevents GitLab's runner from opening a shell via sh):
$ docker build --tag=buildozer .
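If you'd rather script that edit than do it by hand, a one-liner such as this (GNU sed, run before the docker build above) should do it:

sed -i '/^ENTRYPOINT/d' Dockerfile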
Go to project folder with kivy(md) app code and run:
$ docker run --volume ${pwd}:/home/user/hostcwd buildozer buildozer init
Edit buildozer.spec file:
Set android.accept_sdk_license = True and add your Python package requirements, as well as pillow.
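For reference, the relevant buildozer.spec lines might look something like this (the requirements list is illustrative and depends on your app):

requirements = python3,kivy,kivymd,pillow
android.accept_sdk_license = True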
Build the first APK file and install all requirements within the container (if you have trouble running this command and have tried several other approaches before, make sure to delete the .buildozer and bin folders first):
$ docker run --volume ${pwd}:/home/user/hostcwd buildozer buildozer android debug
Check the containers:
$ docker ps -a
Commit the container where the successful build happened in order
to save the added installation files for later builds:
$ docker commit sharp_nightingale registry.gitlab.com/my_user/my_repo:latest
Login to registry:
$ docker login registry.gitlab.com
Push container to registry:
$ docker push registry.gitlab.com/my_user/my_repo
Usage of container from registry
image: $CI_REGISTRY_IMAGE:latest

run-buildozer:
  script:
    - buildozer android debug
  only:
    - main
It took me a while, so hopefully this helps someone else!

Docker build: No matching distribution found

I am trying to build a Docker image that installs a package with pip: RUN pip3 install *package* --index-url=*url* --trusted-host=*url*. However, it fails with the following error:
Could not find a version that satisfies the requirement *package* (from versions: )
No matching distribution found for *package*.
However, after I removed the package and successfully built the image, I could successfully install the package from inside the Docker container!
The command I used to build the image is: sudo docker build --network=host -t adelai:deploy . -f bernard.Dockerfile.
Please try
docker run --rm -ti python bash
Then run your pip ... inside this container.
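For example, something along these lines (the package name and index URL are placeholders, since they are elided in the question):

docker run --rm -ti python bash
# then, inside the container:
pip3 install some-package --index-url=https://pypi.example.com/simple --trusted-host=pypi.example.com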
The problem is solved: I set the environment variable during the build (ARG http_proxy="*url*") and unset it (ENV http_proxy=) just before the installation.
I am not an expert in Docker, but my guess is that the environment variables are discarded after the build, which causes the environment to differ between the Dockerfile build and the running container.
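A minimal Dockerfile sketch of that fix (the real package name and URLs are elided in the question, so everything here is a placeholder):

FROM python:3.9
# Hypothetical proxy, available while earlier build steps run
ARG http_proxy="http://proxy.example.com:8080"
# Clear the variable just before the installation, as described above
ENV http_proxy=
RUN pip3 install some-package --index-url=https://pypi.example.com/simple --trusted-host=pypi.example.com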
@Matthias Reissner gives a solid guide, but this answer provides a more detailed way to debug problems during Docker builds.

How to upgrade Strapi in Docker container?

I launched Strapi with docker-compose. After reading the Migration Guide, I still don't know which method I should choose if I want to upgrade to the next version:
1. In the Strapi project directory, execute npm install strapi@<next version> -g and npm install strapi@<next version> --save
2. docker exec -it <strapi container> sh, navigate to the Strapi project directory, then execute npm install strapi@<next version> -g and npm install strapi@<next version> --save
3. Neither?
In your local developer tree, update the package version in your package.json file. Run npm install or yarn install locally. Start your application. Verify that it works. Run your tests. Fix any compatibility issues from the upgrade. Do all of this without Docker involved at all.
Re-run docker build . to rebuild your Docker image with the new package dependencies.
Stop the old container, delete it, and run a new container with the new image.
As a general rule you should never install anything in a running container. It's extremely routine to delete containers, and when you do, anything in the container will be lost.
There's a common "pattern" of running Node in Docker, bind-mounting your application into it, and then mounting an anonymous volume over your node_modules directory. For routine development I've found it vastly simpler to just install Node on my host (it is literally a single apt-get install or brew install command). If you're using this Docker-oriented setup, the anonymous volume for node_modules won't notice that you've changed your node_modules directory, and you have to re-run docker build and delete and recreate your containers.
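As a rough sketch of that workflow (the image and container names are examples):

# Update and verify locally, without Docker
npm install strapi@<next version> --save
npm test
# Rebuild the image and recreate the container
docker build -t my-strapi-app .
docker stop strapi_container
docker rm strapi_container
docker run -d --name strapi_container my-strapi-app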
TLDR: 3, while 2 was going in the right direction.
The official documentation wasn't clear to me the first time either.
Below is a spin-off step-by-step guide for going from 3.0.5 to 3.1.5 in a docker-compose context.
It tries to follow the official documentation as closely as possible, but includes some extra steps (mandatory in my case).
Upgrade Strapi
The following relates to the strapi/strapi (not strapi/base) Docker image used via docker-compose.
Important! Upgrading the Docker image version DOES NOT upgrade the Strapi version.
The Strapi NodeJS application builds itself during the first startup only, if it detects an empty folder, and it is normally stored in a mounted volume. See docker-entrypoint.sh.
To upgrade, first follow the guides (general and version-specific) to rebuild the actual Strapi NodeJS application. Second, update the Docker tag to match the version, to avoid confusion.
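As an illustration, the relevant docker-compose.yml fragment might look like this (the service and volume names are examples):

services:
  strapi:
    image: strapi/strapi:3.1.5   # bump this tag once the app itself is migrated
    volumes:
      - ./app:/srv/app           # the built Strapi project lives here, not in the image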
Example of upgrading from 3.0.5 to 3.1.5:
# https://strapi.io/documentation/developer-docs/latest/guides/update-version.html
# Make sure your server is not running until the end of the migration
## That instruction is unclear. I stopped Nginx to prevent access to the application, without stopping Strapi itself.
docker-compose exec strapi bash # enter running container
## Alternative way would be `docker-compose stop strapi` and manually reconstruct container options using `docker`, overriding entrypoint with `--entrypoint /bin/bash`
# Few checks
yarn strapi version # current version installed
yarn info strapi #npm info strapi@3.1.x version # available versions
yarn --version #npm --version
yarn list #npm list
cat package.json
# Upgrade your dependencies
sed -i 's|"3.0.5"|"3.1.5"|g' package.json && cat package.json
yarn install #npm install
yarn strapi version
# Breaking changes? See version-specific migration guide!
## https://strapi.io/documentation/developer-docs/latest/migration-guide/migration-guide-3.0.x-to-3.1.x.html
## Define the admin JWT Token
## Update username constraint for administrators
docker-compose exec db bash
psql strapi strapi
-- show tables and describe one
\dt
\d strapi_administrator
## Migrate your custom admin panel plugins
# Rebuild your administration panel
rm -rf node_modules # workaround for "Error: Module not found: Error: Can't resolve"
yarn build --clean #npm run build -- --clean
# Extensions?
# Start your application
yarn develop #npm run develop
# Confirm & test, visit URL
# Errors?
## Error: ENOSPC: System limit for number of file watchers reached, ...
# Can be solved by modifying a kernel parameter on the Docker HOST system
sudo vi /etc/sysctl.conf # fs.inotify.max_user_watches=524288
sudo sysctl -p
# Modify docker-compose.yml to reflect the version change and avoid confusion!
docker ps
vi docker-compose.yml # e.g. 3.0.5 > 3.1.5
docker-compose up --force-recreate --no-deps -d strapi
# ... and remove old docker image, when no longer required.
P.S. We can improve the documentation together via https://github.com/strapi/documentation. I made a pull request: https://github.com/strapi/strapi-docker/pull/276

Source files are updated, but CMD does not reflect

I'm new to docker and am trying to dockerize an app I have. Here is the dockerfile I am using:
FROM golang:1.10
WORKDIR /go/src/github.com/myuser/pkg
ADD . .
RUN curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
RUN dep ensure
CMD ["go", "run", "cmd/pkg/main.go"]
The issue I am running into is that I will update source files on my local machine with some log statements, rebuild the image, and try running it in a container. However, the CMD (go run cmd/pkg/main.go) will not reflect the changes I made.
I looked into the container filesystem and I see that the source files are updated and match what I have locally. But when I run go run cmd/pkg/main.go within the container, I don't see the log statements I added.
I've tried using the --no-cache option when building the image, but that doesn't seem to help. Is this a problem with the golang image, or my dockerfile setup?
UPDATE: I have found the issue. It is related to using dep for vendoring. The vendor folder had outdated files for my package because dep ensure was pulling them from GitHub instead of using my local changes. I will be moving to Go 1.11, which supports Go modules, to fix this.
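For what it's worth, the dep-to-modules switch is only a couple of commands once you're on Go 1.11 or newer (the module path here is taken from the Dockerfile's WORKDIR):

# Initialize a module and pull in dependencies (replaces dep ensure)
go mod init github.com/myuser/pkg
go mod tidy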
I see several things:
According to your Dockerfile:
Maybe you need a dep init before dep ensure.
You should probably check that the path to main.go is correct.
According to Docker philosophy:
In my humble opinion, you should create the image with docker build -t <your_image_name> ., executing that where your Dockerfile is, but without the CMD line.
Then I would execute your command via docker run -d <your_image_name> go run cmd/pkg/main.go, or whatever your command is.
If something is wrong, you can check exited containers with docker ps -a, and further check their logs with docker logs <your_container_name/id>.
Another way to check logs is to access the container using bash and execute go run manually:
docker run -ti <your_image_name> bash
# go run blablabla