Superset dev server not reflecting changes - docker

After cloning the official GitHub repo of Superset, I run docker-compose -f docker-compose-non-dev.yml up
and after that cd superset-frontend && npm run dev-server.
I'm able to see the project running on port 9000,
but if I make any changes I don't see any difference.
What am I doing wrong when starting the dev environment?

If you're trying to edit any of the backend projects, have you tried running them with docker-compose -f docker-compose.yml up instead of docker-compose -f docker-compose-non-dev.yml up? And if you're trying to edit the project from the superset-frontend directory, have you tried running it with npm run dev instead? I see that npm run dev runs a script with the --watch flag, so that sounds like what you're looking for.
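In other words, the two live-reload paths would look something like this (a sketch based only on the commands named above; exact behavior depends on the repo's current compose files):
# backend, using the dev compose file instead of the non-dev one
docker-compose -f docker-compose.yml up
# frontend, using the watch-mode script instead of dev-server
cd superset-frontend && npm run dev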

Related

NPM scripts not running sequentially

I'm writing e2e tests with Supertest for my NestJS application and I have a "test:e2e" script which looks like this:
"test:e2e": "nerdctl compose up && dotenv -e .env.test -- jest --no-cache --config ./test/jest-e2e.json && nerdctl compose down"
When I run the command yarn test:e2e, it stops after spinning up my Docker container (from the nerdctl compose up command) and doesn't run my tests or tear the container down. I know the double ampersands && are used to run the scripts sequentially, which is my goal here, but I can't figure out why it stops after spinning up my Docker container. Perhaps spinning up the container takes too long? Any help is greatly appreciated!
Environment:
macOS v12.6.1
Node v18.12.1
NPM v8.19.2
Silly mistake: I forgot to add the -d option to run my container in the background. Thank you Steven!
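For reference, the fixed script just adds -d so that nerdctl compose returns once the container is up, letting the rest of the && chain run:
"test:e2e": "nerdctl compose up -d && dotenv -e .env.test -- jest --no-cache --config ./test/jest-e2e.json && nerdctl compose down"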

Update Solidity version in Docker container

I installed oyente using the Docker installation described at
https://github.com/enzymefinance/oyente, with the following command:
docker pull luongnguyen/oyente && docker run -i -t luongnguyen/oyente
I can analyse older smart contracts, but I get a compilation error when I try it on newer contracts. I need to update the version of solc, but I couldn't.
On the container the current version is
solc, the solidity compiler commandline interface
Version: 0.4.21+commit.dfe3193c.Linux.g++
I read that the best way to update it is to use npm, so I executed the following command, but I am getting errors, which I assume is because the npm version is old as well:
docker exec -i container_name bash -c "npm install -g solc"
I would appreciate any help, as I have been trying to solve this for hours now. Thanks in advance,
Ferda
Docker's standard model is that an image is immutable: it contains a fixed version of your application and its dependencies, and if you need to update any of this, you need to build a new image and start a new container.
The first part of this, then, looks like any other Node package update. Install Node in the unlikely event you don't have it on your host system. Run npm update --save solc to install the newer version and update your package.json and package-lock.json files. This is the same update you'd do if Docker weren't involved.
Then you can rebuild your Docker image with docker build. This is the same command you ran to initially build the image. Once you've created the new image, you can stop, delete, and recreate your container.
# If you don't already have Node, get it
# brew install nodejs
# Update the dependency
npm update --save solc
# Run your test suite against the updated dependency
npm run test
# Rebuild the image
docker build -t image_name .
# Recreate the container
docker stop container_name
docker rm container_name
docker run -d --name container_name image_name
# Run integration tests against the recreated container
npm run integration
# Commit the updated dependency files
git add package*.json
git commit -m 'update solc version to 0.8.14'
Some common Docker/Node setups try to store the node_modules library tree in an anonymous volume. This can't be easily updated, and hides the node_modules tree that gets built from the image. If you have this setup (maybe in a Compose volumes: block) I'd recommend deleting any volumes or mounts that hide the image contents.
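If you do have that setup, a minimal cleanup sketch (assuming Compose manages the containers and volumes) is:
# stop everything and remove containers together with their volumes
docker-compose down -v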
Note that this path doesn't use docker exec at all. Think of this like getting a debugger inside your running process: it's very useful when you need it, but anything you do there will be lost as soon as the process or container exits, and it shouldn't be part of your normal operational toolkit.

chmod in Dockerfile does not permanently change permissions

I have the below Dockerfile to create a Node.js server. It clones a repo that contains the script that starts the node servers, but that script needs to be executable.
The first time I start this Docker container, it works as expected. I know the chmod is doing something, because without it the script "does not exist", since it does not have the x permission.
However, if I then open a TTY into the container and inspect the file, the permissions have not changed. If the container is restarted, it gives an error that the file does not exist (the x permission is missing). If I execute the chmod manually after the first run, I can restart the container without issues.
I have tried it as part of the combined RUN command (as below) and as a separate RUN command, both written normally and as RUN ["chmod", "755", "/opt/youtube4me/start_all.sh"], and both with 755 and +x. All of those ways work on the first start but do not permanently change the permissions.
I'm very new to Docker and I think I'm missing some basic concept that explains that chmod is run in some sort of context (maybe?).
FROM node:10.19.0-alpine
RUN apk update && \
    apk add git && \
    git clone https://github.com/0502-crew/youtube4me.git /opt/youtube4me && \
    chmod 755 /opt/youtube4me/start_all.sh
COPY config/. /opt/youtube4me/
EXPOSE 45011
WORKDIR /opt/youtube4me
CMD ["/opt/youtube4me/start_all.sh"]
UPDATE 1
I run these commands with docker:
sudo docker build . -t 0502-crew/youtube4me
sudo docker container run -d --name youtube4me --network host 0502-crew/youtube4me
start_all.sh
#!/bin/sh
# Reset to master in case any force pushes were done
git reset --hard origin/master
# Start client server
cd client && npm i && npm run build && nohup npm run prod &
# Start api server
cd api && npm i && npm run build && nohup npm run prod &
# Keep server running by tailing nothing, forever...
tail -f /dev/null
The config folder from which I COPY just contains 2 .ts files that are in .gitignore and so need to be added manually:
config/api/src/config/Config.ts
config/client/src/config/Config.ts
UPDATE 2
If I replace the CMD line with CMD ["sh", "-c", "tail -f /dev/null"], the x permission remains on the sh file. It seems that executing the script removes the permission...
As @dennisvandehoef pointed out, the fundamentals of my approach are off. Including a git clone in the Dockerfile will, for example, have the side effect that building the image a second time will not clone the repo again, even if files have changed.
However, I did find a solution to this approach so I might as well share it for the sake of knowledge sharing:
I develop on Windows, so I never needed to set permissions on the script, but Linux does need them. I found out you can still set the +x permission bit using git on Windows and commit it. When Docker clones the repo, chmod is then no longer needed at all.
git update-index --chmod=+x start_all.sh
If you then ls the files with this command
git ls-files --stage
you will see that all other files are 100644 but the sh file is 100755 (so permission 755).
Next just commit it
git commit -m "Made sh executable"
I had to delete my Docker images afterwards and rebuild them to ensure the git clone was performed again.
I'll rewrite my Dockerfile at a later point, but for now it works as intended.
It is still a mystery to me that the x bit disappears when the chmod is part of the Dockerfile, though the git reset --hard origin/master in start_all.sh is the likely culprit: it restores the committed (non-executable) file mode, wiping out the chmod baked into the image.
You are building a Dockerfile for the project you are pulling from git, and your start_all script includes a git reset --hard origin/master, which is bad practice since you can no longer create versions of your Docker images.
Also, you copy these 2 files to the wrong directory: with COPY config/. /opt/youtube4me/ you copy them directly to the root of your project, not to the given locations config/api/src/config/Config.ts and config/client/src/config/Config.ts.
I realize that fixing these problems will not fix the chmod problem itself, but it might make it go away for your specific use case.
Also, if the files you are excluding from git are secrets, you should not add them to your Docker image while building either, since then (after you push the image to a registry) they will be public again. Therefore it is good practice to never add secrets while building.
Did you try something like this:
Dockerfile
FROM node:10.19.0-alpine
RUN mkdir /opt/youtube4me
COPY . /opt/youtube4me/
WORKDIR /opt/youtube4me
RUN chmod 755 /opt/youtube4me/start_all.sh
EXPOSE 45011
CMD ["/opt/youtube4me/start_all.sh"]
.dockerignore
api/src/config/Config.ts
client/src/config/Config.ts
script
# Start client server
cd client && npm i && npm run build && nohup npm run prod &
# Start api server
cd api && npm i && npm run build && nohup npm run prod &
# Keep server running by tailing nothing, forever...
tail -f /dev/null
You then need to run docker run with 2 volumes to add the config to it. This is done with -v, for example: docker run -v $(pwd)/config/api/src/config/:/opt/youtube4me/api/src/config/ -v $(pwd)/config/client/src/config/:/opt/youtube4me/client/src/config/ <your_image_name>
Nevertheless, I wonder why you are running both services in one Docker image. If this is just for local execution, you could also think about creating 2 separate Docker images and using docker-compose to spawn them together, as sketched below.
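A minimal hypothetical docker-compose.yml for that split (the per-service layout and the client port are assumptions; 45011 comes from the EXPOSE line in the Dockerfile above):
version: "3.8"
services:
  api:
    build: ./api        # assumes each service gets its own Dockerfile
    ports:
      - "45011:45011"   # port from the original Dockerfile's EXPOSE
  client:
    build: ./client
    ports:
      - "8080:8080"     # hypothetical client port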
Addition 1:
I thought a bit about this, and it is also good practice to configure a Docker image with environment variables instead of adding a config file to it. You might want to switch to that as well. I advise reading this article to get a better understanding of the why and of other possible approaches.
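As a hypothetical illustration (these variable names are made up, not taken from the project):
# pass configuration at runtime instead of baking Config.ts into the image
docker run -d --name youtube4me \
  -e API_PORT=45011 \
  -e CLIENT_API_URL=http://localhost:45011 \
  --network host 0502-crew/youtube4me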
Addition 2:
I created a pull request on your code that is an example of how it could look with docker-compose. It currently does not build because of the config options, but it will give you some more insights.

How to upgrade Strapi in Docker container?

I launched Strapi with docker-compose. After reading the Migration Guide, I still don't know which method I should choose if I want to upgrade to the next version:
1. In the Strapi project directory, execute npm install strapi@<next version> -g and npm install strapi@<next version> --save
2. docker exec -it <strapi container> sh, navigate to the Strapi project directory, then execute npm install strapi@<next version> -g and npm install strapi@<next version> --save
3. Neither?
In your local developer tree, update the package version in your package.json file. Run npm install or yarn install locally. Start your application. Verify that it works. Run your tests. Fix any compatibility issues from the upgrade. Do all of this without Docker involved at all.
Re-run docker build . to rebuild your Docker image with the new package dependencies.
Stop the old container, delete it, and run a new container with the new image.
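Concretely, the loop might look like this (a sketch only; the image and container names are placeholders to adapt to your setup):
# on your host, outside Docker
npm install strapi@<next version> --save
npm test
# rebuild the image and replace the container
docker build -t my-strapi-app .
docker stop strapi && docker rm strapi
docker run -d --name strapi my-strapi-app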
As a general rule you should never install anything in a running container. It's extremely routine to delete containers, and when you do, anything in the container will be lost.
There's a common "pattern" of running Node in Docker, bind-mounting your application into it, and then mounting an anonymous volume over your node_modules directory. For routine development I've found it vastly simpler to just install Node on my host (it is literally a single apt-get install or brew install command). If you're using this Docker-oriented setup, the anonymous volume for node_modules won't notice that you've changed your node_modules directory, and you have to re-run docker build and delete and recreate your containers.
TLDR: 3, while 2 was going in the right direction.
The official documentation wasn't clear the first time for me either.
Below is a spin-off step-by-step guide for going from 3.0.5 to 3.1.5 in a docker-compose context.
It tries to follow the official documentation as closely as possible, but includes some extra (mandatory in my case) steps.
Upgrade Strapi
The following relates to the strapi/strapi (not strapi/base) Docker image used via docker-compose.
Important! Upgrading Docker image versions DOES NOT upgrade the Strapi version.
The Strapi NodeJS application builds itself during the first startup only, if it detects an empty folder; the application is normally stored in a mounted volume. See docker-entrypoint.sh.
To upgrade, first follow the guides (general and version-specific) to rebuild the actual Strapi NodeJS application. Second, update the Docker tag to match the version, to avoid confusion.
Example of upgrading from 3.0.5 to 3.1.5:
# https://strapi.io/documentation/developer-docs/latest/guides/update-version.html
# Make sure your server is not running until the end of the migration
## That is an unclear instruction. I stopped Nginx to prevent access to the application, without stopping Strapi itself.
docker-compose exec strapi bash # enter running container
## Alternative way would be `docker-compose stop strapi` and manually reconstruct container options using `docker`, overriding entrypoint with `--entrypoint /bin/bash`
# Few checks
yarn strapi version # current version installed
yarn info strapi #npm info strapi@3.1.x version # available versions
yarn --version #npm --version
yarn list #npm list
cat package.json
# Upgrade your dependencies
sed -i 's|"3.0.5"|"3.1.5"|g' package.json && cat package.json
yarn install #npm install
yarn strapi version
# Breaking changes? See version-specific migration guide!
## https://strapi.io/documentation/developer-docs/latest/migration-guide/migration-guide-3.0.x-to-3.1.x.html
## Define the admin JWT Token
## Update username constraint for administrators
docker-compose exec db bash
psql strapi strapi
-- show tables and describe one
\dt
\d strapi_administrator
## Migrate your custom admin panel plugins
# Rebuild your administration panel
rm -rf node_modules # workaround for "Error: Module not found: Error: Can't resolve"
yarn build --clean #npm run build -- --clean
# Extensions?
# Start your application
yarn develop #npm run develop
# Confirm & test, visit URL
# Errors?
## Error: ENOSPC: System limit for number of file watchers reached, ...
# Can be solved by modifying kernel parameter at docker HOST system
sudo vi /etc/sysctl.conf # fs.inotify.max_user_watches=524288
sudo sysctl -p
# Modify docker-compose to reflect version changed and avoid confusion !
docker ps
vi docker-compose.yml # e.g. 3.0.5 > 3.1.5
docker-compose up --force-recreate --no-deps -d strapi
# ... and remove old docker image, when no longer required.
P.S. We can improve the documentation together via https://github.com/strapi/documentation. I made a pull request: https://github.com/strapi/strapi-docker/pull/276

Source files are updated, but CMD does not reflect the changes

I'm new to Docker and am trying to dockerize an app I have. Here is the Dockerfile I am using:
FROM golang:1.10
WORKDIR /go/src/github.com/myuser/pkg
ADD . .
RUN curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
RUN dep ensure
CMD ["go", "run", "cmd/pkg/main.go"]
The issue I am running into is that I will update source files on my local machine with some log statements, rebuild the image, and try running it in a container. However, the CMD (go run cmd/pkg/main.go) will not reflect the changes I made.
I looked into the container filesystem and I see that the source files are updated and match what I have locally. But when I run go run cmd/pkg/main.go within the container, I don't see the log statements I added.
I've tried using the --no-cache option when building the image, but that doesn't seem to help. Is this a problem with the golang image, or with my Dockerfile setup?
UPDATE: I have found the issue. It is related to using dep for vendoring. The vendor folder had outdated files for my package because dep ensure was pulling them from GitHub instead of using my local changes. I will be moving to Go 1.11, which supports Go modules, to fix this.
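For anyone following the same path, a hypothetical migration sketch (the module path is taken from the WORKDIR in the Dockerfile above; adjust as needed):
# switch from dep to Go modules (Go 1.11+)
go mod init github.com/myuser/pkg
go mod tidy
# remove the dep artifacts once the build works
rm -rf vendor Gopkg.toml Gopkg.lock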
I see several things:
Regarding your Dockerfile:
Maybe you need a dep init before dep ensure.
Probably you need to check that the main.go path is correct.
Regarding Docker philosophy:
In my humble opinion, you should create an image with docker build -t <your_image_name> ., executing that where your Dockerfile is, but without the CMD line.
I would then execute your go run command as part of docker run: docker run -d <your_image_name> go run cmd/pkg/main.go, or whatever your command is.
If something is wrong, you can check exited containers with docker ps -a and inspect their logs with docker logs <your_CONTAINER_name/id>.
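Putting those suggestions together (the image and container names here are placeholders):
docker build -t my_image .
docker run -d --name my_app my_image go run cmd/pkg/main.go
docker ps -a          # find the container and its status
docker logs my_app    # inspect the output of go run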
Another way to check the logs is to access the container using bash and execute go run manually:
docker run -ti <your_image_name> bash
# go run blablabla
