Permission denied when trying to copy a file from a local location to a container - docker

I'm following the steps here to set up a distributed test with JMeter, but when I copy my local JMeter test into the master container I get a permission denied error, specifically:
sh: 2: /jmeter/apache-jmeter-3.3/bin/: Permission denied

I'm not clear on what you're trying to do.
If you're trying to copy a file from your host to a Docker container, why not just mount the file/directory into the container at runtime using --mount or -v? For example:
docker run -v <local path>:<dst path on docker container> <ImageName>
Edit: This works between multiple containers as well. You can use shared volumes to share storage between two or more containers. Read more here: https://docs.docker.com/storage/volumes/
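For the JMeter case above, a minimal sketch of that approach (the image name jmeter-master and the in-container paths are assumptions, not taken from the original setup):

```shell
# Sketch: bind-mount a local JMeter test plan into the container instead
# of copying it, then run it in non-GUI mode (-n -t).
run_jmeter_with_plan() {
  local plan="$1"   # absolute host path to the .jmx test plan
  docker run -v "$plan":/jmeter/test-plan.jmx jmeter-master \
    /jmeter/apache-jmeter-3.3/bin/jmeter.sh -n -t /jmeter/test-plan.jmx
}
```

Usage: run_jmeter_with_plan "$PWD/test-plan.jmx"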

Execute the following commands:
docker exec -t master chmod +x /jmeter/apache-jmeter-3.3/bin/jmeter.sh
docker exec -t slave01 chmod +x /jmeter/apache-jmeter-3.3/bin/jmeter.sh
etc.
This will make the jmeter.sh script executable via the chmod command.
Also be aware that, according to JMeter Best Practices, you should always use the latest version of JMeter, so consider upgrading to JMeter 5.1 (or whatever the latest version available at the JMeter Downloads page is) at the next opportunity.
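When there are several slave containers, the chmod fix above can be scripted instead of typed once per container; a small sketch (the container names are just examples):

```shell
# Mark jmeter.sh as executable in every listed container, so the
# "Permission denied" error goes away for all of them.
fix_jmeter_permissions() {
  for container in "$@"; do
    docker exec -t "$container" chmod +x /jmeter/apache-jmeter-3.3/bin/jmeter.sh
  done
}
```

Usage: fix_jmeter_permissions master slave01 slave02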

Related

racadm - ERROR: Specified file <filename> does not exist

I'm trying to run racadm both in Windows Powershell using the official utility and on my Mac using this Docker container. In both instances, I can pull the RAC details, so I know my login and password are valid, but when I try to perform an sslkeyupload, I get the following error:
ERROR: Specified file file.pem does not exist.
The permissions on the file, at least on my Mac, are wide open (chmod 777), and the file is in the same directory I'm trying to run the script from:
docker run stackbot/racadm -r 10.10.1.4 -u root -p calvin sslkeyupload -t 1 -f ./file.pem
Anyone see anything obvious I may be doing wrong?
You're running the command inside a Docker container. It has no visibility to your local filesystem unless you explicitly expose your directory inside the container using the -v command line option:
docker run -v $PWD:$PWD -w $PWD ...
The -v option creates a bind mount, and the -w option sets the working directory.
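Applied to the racadm command from the question, that looks roughly like this (wrapped in a function just for readability; all flags and paths are taken from the question):

```shell
# Sketch: expose the current directory inside the container (-v) and make
# it the working directory (-w), so racadm can find ./file.pem.
racadm_sslkeyupload() {
  docker run -v "$PWD":"$PWD" -w "$PWD" stackbot/racadm \
    -r 10.10.1.4 -u root -p calvin sslkeyupload -t 1 -f ./file.pem
}
```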

code changes in the docker container as a root user

I'm experimenting with the docker concept for C++ development. I wrote a Dockerfile that includes instructions for installing all of the necessary libraries and tools. CMake is being used to build C++ code and install binaries. Because of a specific use case, the binaries should be executed as root user. I'm running the docker container with the following command.
docker run -it --rm -u 0 -v $(pwd):/mnt buildmcu:focal
The issue now is that I am a root user inside the Docker container, and if I make any code changes or create a new file inside the Docker container, I get a permission error on the host machine if I try to access it. I need to run sudo chmod ... to change the permissions. Is there any way to allow source modification in the Docker container and on the host machine without permission errors?
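This question is left open in the thread, but one common workaround (an assumption on my part, not from the post) is to keep building as root and then restore host ownership of the mounted directory afterwards:

```shell
# Sketch: run the build as root as before, then hand ownership of
# everything under the mount back to the host user (id -u / id -g).
build_as_root_then_fix_ownership() {
  docker run -it --rm -u 0 -v "$(pwd)":/mnt buildmcu:focal
  docker run --rm -u 0 -v "$(pwd)":/mnt buildmcu:focal \
    chown -R "$(id -u):$(id -g)" /mnt
}
```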

How to run Servicemix commands inside docker in Karaf console using deploy script?

It so happens that I'm now using ServiceMix 7.0 on an old project. There are several commands that I run manually.
Build the servicemix image:
docker build --no-cache -t servicemix-http:latest .
Start the container, mounting the local source folder and the .m2 folder:
docker run --rm -it -v %cd%:/source -v %USERPROFILE%/.m2:/root/.m2 servicemix-http
Start the servicemix console and run:
feature:repo-add file:/../source/source/transport/feature/target/http-feature.xml
Then run:
feature:install http-feature
Copy the deploy files from the local folder into the servicemix deploy folder:
docker cp /configs/. :/deploy
Update the servicemix image:
docker commit servicemix-http
Now I'm writing the gitlab-ci.yml for the deployment, and my question concerns the servicemix commands that are run from the karaf console:
feature:repo-add
feature:install
Is there any way to script them?
If all you need is to install features from specific feature repositories on startup, you can add the features and feature repositories to /etc/org.apache.karaf.features.cfg.
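As a sketch, the relevant keys in that file are featuresRepositories and featuresBoot (the repository URL and feature name below are taken from the question; the ... stands for the entries already present):

```properties
# etc/org.apache.karaf.features.cfg (excerpt)
featuresRepositories = ..., file:/../source/source/transport/feature/target/http-feature.xml
featuresBoot = ..., http-feature
```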
You can use ssh with private-key to pass commands to Apache karaf.
ssh karaf@localhost -p 8101 -i ~/.ssh/karaf_private_key -- bundle:list
You can also try running commands through Karaf_home/bin/client, although I had no luck using it with Windows 10 and ServiceMix 7.0.1 when I tried; it kept throwing a NullReferenceException, so I'm guessing my Java installation is missing some security-related configuration.
It works well when running newer Karaf installations on Docker, though, and can be used through docker exec as well.
# Using password - (doesn't seem to be an option for servicemix)
docker exec -it karaf /opt/apache-karaf/bin/client -u karaf -p karaf -- bundle:list
# Using private-key
docker exec -it karaf /opt/apache-karaf/bin/client -u karaf -k /keys/karaf_key -- bundle:list
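For the gitlab-ci case, the two feature commands can then be sent through bin/client in a single call; a sketch (the container name servicemix-http and the client path are assumptions for a ServiceMix image, and the repository URL is taken from the question):

```shell
# Sketch: run the Karaf feature commands non-interactively, e.g. from a
# CI job script, by passing them as one semicolon-separated string.
servicemix_install_feature() {
  docker exec servicemix-http /opt/apache-servicemix/bin/client \
    "feature:repo-add file:/../source/source/transport/feature/target/http-feature.xml; feature:install http-feature"
}
```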

No permissions to create screenshot directory when using docker image testcafe/testcafe

I'm using https://hub.docker.com/r/testcafe/testcafe/
to run our Testcafe project and it works except that on failure the screenshot directory cannot be created due to:
Error: EACCES: permission denied, mkdir '/screenshots'
Is it possible to make this work, am I missing something?
I have tried:
--screenshots ./screenshots
and:
--screenshots {full path to directory}/screenshots
For future reference, how do I give this Docker container access to write to a directory on the host machine?
The simplest solution is creating a screenshots directory on your Docker host, configuring correct permissions and passing this directory to the container as a volume. You can use the following commands as a reference:
mkdir screenshots
chmod a=rwx screenshots
docker run -it --rm -v "$(pwd)/tests":/tests -v "$(pwd)/screenshots":/screenshots testcafe/testcafe firefox /tests --screenshots /screenshots
(Note that -v needs absolute paths on older Docker versions, hence the $(pwd).)

Docker on Windows getting "Could not locate Gemfile"

I'm trying to learn Docker using Windows as the host OS to create a container using Rails image from Docker Hub.
I've created a Dockerfile with the content below and an empty Gemfile, however I'm still getting the error "Could not locate Gemfile".
Dockerfile
FROM rails:4.2.6
The commands I used are the following (not understanding what they actually do though):
ju.oliveira#br-54 MINGW64 /d/Juliano/ddoc
$ docker build -t ddoc .
Sending build context to Docker daemon 4.608 kB
Step 1 : FROM rails:4.2.6
---> 3fc52e59c752
Step 2 : MAINTAINER Juliano Nunes
---> Using cache
---> d3ab93260f0f
Successfully built d3ab93260f0f
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
$ docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app ruby:2.1 bundle install
Unable to find image 'ruby:2.1' locally
2.1: Pulling from library/ruby
fdd5d7827f33: Already exists
a3ed95caeb02: Pull complete
0f35d0fe50cc: Already exists
627b6479c8f7: Already exists
67c44324f4e3: Already exists
1429c50af3b7: Already exists
f4f9e6a0d68b: Pull complete
eada5eb51f5d: Pull complete
19aeb2fc6eae: Pull complete
Digest: sha256:efc655def76e69e7443aa0629846c2dd650a953298134a6f35ec32ecee444688
Status: Downloaded newer image for ruby:2.1
Could not locate Gemfile
So, my questions are:
Why can't it find the Gemfile if it's in the same directory as the Dockerfile?
What does the command docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app ruby:2.1 bundle install do?
How do I set a folder in my host file system to be synced to the container (I'm trying to create a development environment for Rails projects using Docker on Windows)?
I don't know if this makes any difference, but I'm running this command from the bash executed via "Docker Quickstart Terminal" shortcut. I think all it does is run these commands in a default VM, though I could create a new one (but I don't know if I should do this).
Thank you, and sorry for all these questions, but right now Docker seems very confusing to me.
You must mount the host directory somewhere inside your home directory (e.g. C:/Users/john/*).
$PWD will give you a Unix-like path. If your shell is Cygwin-like, it will look like /cygdrive/c/Users/... or something similar. However, Docker and VirtualBox are Windows executables, so they expect a plain Windows path. It also seems Docker cannot accept a Windows path in the -v command line, so it is converted to /c/Users/....
The other answers may be right that you cannot access a directory outside your home for some reason (though I wouldn't know why). To solve your problem, create a junction within your home that points to the path you want, then mount that path in your home.
>mklink /j \users\chloe\workspace\juliano \temp
Junction created for \users\chloe\workspace\juliano <<===>> \temp
>docker run -v /c/Users/Chloe/workspace/juliano:/app IMAGE-NAME ls
007.jpg
...
In your case that would be
mklink /j C:\Users\Juliano\project D:\Juliano\ddoc
docker run -v /c/Users/Juliano/project:/usr/src/app -w /usr/src/app ruby:2.1 bundle install
I don't know what --rm does. I assume -w sets the working directory. -v sets the volume mount and maps the host path to the container path. ruby:2.1 uses the standard Docker Ruby 2.1 image. bundle install runs Bundler!
