For example, when I use the ls command to display the contents of a directory, I get the message " 'ls' is not recognized as an internal or external command. "
The same happens with other commands, such as cat and rm.
Running
docker run -t -i -w=[absolute_work_dir] [docker_image] [executable]
is fine. However, when I set the working directory using a variable in a PowerShell script (.ps1):
$WORKDIR = [absolute_work_dir]
docker run -t -i -w=$WORKDIR [docker_image] [executable]
it gives the error:
docker: Error response from daemon: the working directory '$WORKDIR' is invalid, it needs to be an absolute path.
What is possibly wrong?
You could assemble a string with your variable, then execute the string as a command.
$DOCKRUNSTR = "docker run -t -i -w=" + $WORKDIR + " [docker_image] [executable]"
& $DOCKRUNSTR
The & call operator tells PowerShell to run that string as a command. Note, though, that & treats the whole string as a single command name.
Edit: inside a .ps1 file, try Invoke-Expression instead of &.
There may be a better PowerShell solution, but this seems like it could get the job done for you.
I have a legacy Docker application I'm working with that uses multiple Celery workers. There is a long-running process I need to track. I'm able to write data to a file that is visible from the CLI interface of the worker thread:
I'm writing to the file like this:
def log(msg):
    now = datetime.now()
    dt_string = now.strftime("%Y-%m-%d %H:%M:%S")
    fu.mkdirs(defs.LRP_LOG_DIR)
    fu.append_string_to_file(dt_string + ": " + msg + "\n", defs.LRP_LOG_FILE)

def append_string_to_file(string, file_path):
    with open(file_path, "a") as text_file:
        text_file.write(string)
LRP_LOG_DIR = "/opt/project/backend"
LRP_LOG_FILE = LRP_LOG_DIR + "/lrp-log.txt"
The question is: If I add multiple Celery workers, will they each write to their own file (not the desired behavior) or will they all write to a common /opt/project/backend/lrp-log.txt file (the desired behavior)?
If they don't write to a common file, what do I need to do to get multiple Celery workers to write to the same file?
Also, it would be nice if this file was available on the host file system (I'm running on a Windows machine).
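For what it's worth, whether the workers share one file depends on whether they share the path: containers that mount the same volume at /opt/project/backend will all append to one lrp-log.txt, while containers with separate filesystems each get their own copy. Appending itself is safe to share, since the ">>" redirect opens the file with O_APPEND. A minimal sketch (plain shell with throwaway paths, standing in for the workers):

```shell
#!/bin/sh
# Hypothetical stand-in for several workers appending to one log file.
LOG=$(mktemp)

# Three "workers", each appending five lines concurrently.
# O_APPEND makes every write land at the current end of file,
# so lines interleave but none are overwritten or lost.
for worker in 1 2 3; do
  (
    for i in 1 2 3 4 5; do
      echo "worker $worker: message $i" >> "$LOG"
    done
  ) &
done
wait

# All 15 lines are present in the shared file.
wc -l < "$LOG"
```

This only holds when the processes really open the same file, which for containers means a shared volume or bind mount.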
I ended up writing a couple of .sh scripts for Cygwin (I'm on Windows). I would like to get the tail to work in the same script, but this is good enough for now.
Script to start Docker and write to log file
echo
echo
echo
# STOP CONTAINERS
echo "Stopping all Containers..."
docker kill $(docker ps -q)
# DELETE CONTAINERS
echo "Deleting Containers..."
docker rm $(docker ps -aq)
echo
# PRUNE VOLUMES
echo "Pruning orphaned volumes"
docker volume prune -f
echo
# CREATE LOG DIR
mkdir -p ./logs
# DELETE OLD FULL LOG FILE
echo "Deleting old full log file..."
touch ./logs/full-log.txt
rm ./logs/full-log.txt
touch ./logs/full-log.txt
# SET UP LRP LOG FILE
echo "Deleting old lrp log file..."
touch ./logs/lrp-log.txt
rm ./logs/lrp-log.txt
# TAIL THE LOG FILE (display the running process in a cygwin window)
cygstart tail -f ./logs/full-log.txt
cygstart tail -f ./logs/lrp-log.txt
# START AES
echo "Starting anonlink entity service (aes)..."
echo "Process is running and writing log to ./full-log.txt"
echo "Long Running Process Log (LRP) is being written to lrp-log.txt"
echo "! ! ! DO NOT CLOSE THIS WINDOW ! ! !"
echo "(<ctrl-c> to quit the process)"
docker-compose -p anonlink -f ../tools/docker-compose.yml up --remove-orphans > ./logs/full-log.txt
echo
echo
echo "Done."
echo
echo
Script to create truncated log file to track long running processes
tail -f ./logs/full-log.txt | grep --line-buffered "LOG_FILE:" > ./logs/lrp-log.txt
I am trying to run this command and getting an error:
docker exec 19eca917c3e2 cat "Hi there" > /usr/share/ngnix/html/system.txt
/usr/share/ngnix/html/system.txt: No such file or directory
A very simple command to create a file and write to it; I tried echo too and that didn't work either.
The cat command reads from files, so cat "Hi there" tries to open a file literally named Hi there.
Use echo "Hi there" to write that string to standard output.
You are then redirecting the output to /usr/share/ngnix/html/system.txt. Make sure the directory /usr/share/ngnix/html/ exists. If not, create it using
mkdir -p /usr/share/ngnix/html
I presume you are trying to create the file in the container.
You have several problems going on, one of which @Yatharth Ranjan has addressed: you want echo, not cat, for that use.
The other is that your command is being parsed by the local shell, which breaks it up into docker ... "hello world" and a > ... system.txt redirect applied on your host system.
To get the redirect into the file to be executed in the container, you need to explicitly invoke a shell in the container and then pass it the command:
docker exec 12345 /bin/sh -c "echo \"hello world\" > /usr/somefile.txt"
So, here you would call /bin/sh in the container, pass it -c to tell it a shell command follows, and then the command to parse and execute is your echo "hello world" > the_file.txt.
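You can see the same parsing rule without Docker at all: in the first command below the outer shell applies the redirect before sh -c runs, while in the second the whole command line, redirect included, is one argument parsed by the inner shell (a throwaway local sketch):

```shell
#!/bin/sh
# Throwaway demo directory.
dir=$(mktemp -d)

# Outer shell handles the redirect: sh -c only ever sees `echo hello`.
sh -c 'echo hello' > "$dir/outer.txt"

# Inner shell handles the redirect, like the docker exec form above:
# the redirect is part of the string the inner shell parses.
sh -c "echo hello > $dir/inner.txt"

# Both files end up with the same contents; what differs is
# which shell performed the redirection.
cat "$dir/outer.txt" "$dir/inner.txt"
```

With docker exec, only the second form works the way you want, because the inner shell (and hence the redirect) runs inside the container.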
Of course, a far easier way to copy files into a container is to have them on your host system and then copy them in using docker cp: (where 0123abc is your container name or id)
docker cp ./some-file.txt 01234abc:/path/to/file/in/container.txt
I'm trying to run my .sh script status.sh via a Telegram message:
Ubuntu 20.04.1 LTS server
Telegram-cli with a Lua script that triggers the status.sh script.
When I send the message "status" to my server via Telegram, it runs the status.sh script. The script gathers a bunch of info for me and sends it back to Telegram so I can see the status of my server. However (I recently did a fresh install of the server), for some reason, if the script has a line of code starting with sudo I get:
line 38: /usr/bin/sudo: Permission denied
If I run the script from the command line with ./status.sh, it runs without any problem, so I'm thinking it's because it is being called from Telegram or Lua?
Example of a line that generates the error:
sudo ifconfig enp0s25 >> file
On the other hand, this line works without a problem:
sudo echo Time: $(date +"%H:%M:%S") > file
/usr/bin has 0755 permission set
sudo has 4755 permission set
The following command
sudo ifconfig enp0s25 >> file
would not work if file requires root privilege to be modified.
sudo affects ifconfig but not the redirection.
To fix it:
sudo sh -c 'ifconfig enp0s25 >> file'
As mentioned in Egor Skriptunoff's answer, sudo only affects the command being run with sudo, and not the redirect.
Perhaps nothing is being written to file in your case because ifconfig is writing the output you are interested in to stderr instead of to stdout.
If you want to append both stdout and stderr to file as root, use this command:
sudo sh -c 'ifconfig enp0s25 >> file 2>&1'
Here, sh is invoked via sudo so that the redirect to file will be done as root.
Without the 2>&1, only ifconfig's stdout will be appended to file. The 2>&1 tells the shell to redirect stderr to stdout.
If file can be written to without root, this may simplify to
sudo ifconfig enp0s25 >> file 2>&1
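The effect of 2>&1 is easy to check without sudo, using a command that writes to both streams (a throwaway sketch; the inner sh -c here just plays the role of ifconfig):

```shell
#!/bin/sh
log=$(mktemp)

# Without 2>&1 only stdout reaches the file; the stderr line goes
# to the terminal (suppressed here) and is missing from the log.
sh -c 'echo to-stdout; echo to-stderr >&2' >> "$log" 2>/dev/null

# With 2>&1 stderr is sent wherever stdout currently points,
# so both lines are appended to the file.
sh -c 'echo to-stdout; echo to-stderr >&2' >> "$log" 2>&1

# The log now holds one line from the first run and two from the second.
wc -l < "$log"
```

Note the order matters: the redirections are applied left to right, so ">> file 2>&1" points stdout at the file first and then duplicates stderr onto it.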
I have a Docker container writing logfiles to a named volume.
From the host I want to analyze the logfiles and search for given log messages. But when I access the folder that docker inspect VOLUMENAME reports, I get strange behavior which I do not understand.
E.g. the following command gives only empty lines as output:
user#docker-host-01:~/docker-server-env/otaya-designdb$ sudo bash -c "for logfile in /var/lib/docker/volumes/design-db-logs/_data/*/*; do echo ${logfile}; done"
user#docker-host-01:~/docker-server-env/otaya-designdb$
What could be the reason?
Your local shell expands ${logfile} inside the double quotes before the loop ever happens. Change the double quotes to single quotes.
That is, when you run
sudo bash -c "for ... ; do echo ${logfile}; done"
first your local shell replaces the variable reference with whatever your local environment has set for $logfile (probably nothing):
sudo bash -c 'for ...; do echo ; done'
and then it runs that command. If you change this to single quotes initially
sudo bash -c 'for ... ; do echo ${logfile}; done'
it will avoid this expansion.
You can see this just by putting the word echo at the front of the command: the shell will do its expansion, and then echo will print out the command that would have run.
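A quick way to see both behaviors side by side (hypothetical variable name):

```shell
#!/bin/sh
greeting="set in the outer shell"

# Double quotes: the outer shell substitutes $greeting before
# bash -c runs, so the inner shell echoes the substituted text.
outer=$(bash -c "echo $greeting")

# Single quotes: the inner shell receives the literal `echo $greeting`;
# it has no such variable set, so the expansion is empty.
inner=$(bash -c 'echo $greeting')

echo "double quotes -> $outer"
echo "single quotes -> [$inner]"
```

In the original loop the roles are reversed: there you want the inner shell's ${logfile} to survive, which is exactly what the single quotes give you.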