Running installer within docker file without user interaction - docker

I've been trying to set up a Dockerfile that installs a specific ODBC driver my application needs.
I use the following commands:
RUN cd /tmp/./client1201/
RUN ./setup
and it runs the installer without any problem. The issue is that it requires user input in order to proceed with some steps.
Is there any way I can make this silent? If so, is this some Docker-specific feature, or does it actually require some sort of support from the installer itself?
Thanks for any assistance

RUN echo "yes" | ./script.sh
This may work for you.
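For example, a minimal Dockerfile sketch, assuming the installer lives in a client1201 directory next to the Dockerfile and accepts a single "yes" answer (the base image is arbitrary; pipe whatever input your setup program actually prompts for):
FROM ubuntu:22.04
COPY client1201 /tmp/client1201
# cd and the installer must run in the same RUN instruction, since each RUN starts a fresh shell
RUN cd /tmp/client1201 && echo "yes" | ./setup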

Related

Script to automate timeshift backup and azuracast update

I'm running an Azuracast docker instance on Linode and want to find a way to automate my updates. Right now my routine is: when I notice in the Azuracast web panel that there are updates, I run timeshift to create a backup using the following command
timeshift --create --comment "azuracast update"
And then I use the following to update azuracast
cd /var/azuracast/
./docker.sh update-self
./docker.sh update
Then it asks me to ensure the Azuracast installation is backed up before updating, to which I would usually just press enter.
After that is completed, it asks me if I want to clean up all stopped docker containers and images to save space, which I usually say no to.
What I’m wondering is if there is a way to create a bash script, or python or something to automate all of this, and then have it run on a schedule?
Sure, you can write a shell script to execute these commands and then run it on a schedule using crontab(5).
For example your script might look like:
#! /bin/sh
# Backup azuracast and restart docker container
timeshift --create --comment "azuracast update" && \
cd /var/azuracast/ && \
./docker.sh update-self && \
(yes | ./docker.sh update)
It sounds like this docker.sh program takes some user input. See if there are options you can pass to it that will allow you to run it non-interactively. (It seems there aren't; see the edit below.)
To set up your cron job, you can put the script in /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly, or /etc/cron.monthly. Or if you need more control, you can get started configuring a cron job with crontab -e; the crontab documentation has a fuller explanation.
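For instance, a weekly crontab entry might look like this (the path /usr/local/bin/azuracast-update.sh is just an assumed location for the script above):
# run the backup-and-update script every Monday at 04:00
0 4 * * 1 /usr/local/bin/azuracast-update.sh >> /var/log/azuracast-update.log 2>&1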
EDIT: Assuming this is the script you're using, it doesn't seem to have a way to run update non-interactively. Fear not though, there's a program for this: yes(1). This will answer yes to both of the questions, but honestly running docker system prune -f is probably a good idea. If you really want to answer no to that, you could replace yes with printf 'y\nn\n' to answer yes to the first question and no to the second.
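As a rough sketch of that variant (the prompt order is assumed to match the script linked above):
# answer "y" to the backup confirmation and "n" to the cleanup prompt
printf 'y\nn\n' | ./docker.sh update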
Also note that there's at least one other y/n question it could ask you, which you probably want to answer yes to.

conemu pass env var to WSL bash terminal

I'm trying to get a task defined in ConEmu to run multiple instance of Ubuntu bash using the WSL layer of Windows 10.
I followed the examples to set up a task to split the UI the way I want, and that part works great. My problem is that I'm trying to use environment variables to pass through commands to run after logging in, and I want different things to run in each panel.
Here is the task command I'm using:
set "STARTUP_CMD='gfp && make server' " & set "PATH=%ConEmuBaseDirShort%\wsl;%PATH%" & %ConEmuBaseDirShort%\conemu-cyg-64.exe --wsl -cur_console:p -cur_console:d:C:\xxx\yyy
On the Linux side I have code in my ~/.bash_aliases file that looks for the STARTUP_CMD env var and tries to execute it. I found code that can pull env vars from the Windows side, which is where the 'set' commands appear to be storing things. Problem is, Windows doesn't know what to do with these, and it tries to expand them when they are read, so it all blows up.
I had this working before, but had to wipe and rebuild my machine recently, and unfortunately didn't have the working command backed up anywhere.
I thought this was the recommended way to run bash with WSL, but I would rather have a way to send stuff directly to the Linux layer as env vars (or if someone has a better way to queue up different commands for each pane, I'm all for that too). Any help would be much appreciated.
Thanks!
Of course I find the answer right after posting the question... posting it here to help others who hit the same issue (or my future self if I forget and have to wipe my machine again).
set "PATH=%ConEmuBaseDirShort%\wsl;%PATH%" & %ConEmuBaseDirShort%\conemu-cyg-64.exe --wsl -eSTARTUP_CMD="gfp && make server" -cur_console:p -cur_console:d:C:\xxx\yyy
You just have to prefix the env var you want with -e and pass it as a param to conemu-cyg. It goes through without any modification on the Windows side and you can read it just like any other env var on the Linux side.
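For completeness, a minimal sketch of what the ~/.bash_aliases side could look like (the exact handling is up to you; this simply runs the command if the variable is set):
# run the command handed over by ConEmu, if any
if [ -n "$STARTUP_CMD" ]; then
    eval "$STARTUP_CMD"
fi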

How can users get bazel-run.sh?

bazel run typically occupies the Bazel server, blocking other commands.
https://github.com/bazelbuild/bazel/blob/c484f19a2cf7427887d6e4c71c8534806e1ba83e/scripts/bazel-run.sh is a fantastic replacement
Question: what's a good way for end-users to get hold of that shell script and add to their path? Can we make that part of the bazel install?
I tried ls -R $(bazel info install_base) | grep bazel-run but no luck there.
bazel-run.sh is a good replacement for bazel run when end-users need to run a command interactively or run multiple commands (#2337). There has been no need for us to consider shipping it as part of the installation.
Please file an issue on Github to discuss the possibility of installing it along with Bazel.
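In the meantime, fetching it manually is straightforward; a sketch, assuming ~/bin is a directory already on your PATH (the raw URL is derived from the GitHub link above):
mkdir -p ~/bin
curl -o ~/bin/bazel-run.sh https://raw.githubusercontent.com/bazelbuild/bazel/c484f19a2cf7427887d6e4c71c8534806e1ba83e/scripts/bazel-run.sh
chmod +x ~/bin/bazel-run.sh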

interactive docker build from dockerfile?

I want to use a Dockerfile to build an image. However, some commands need user input as they run. Currently, the build is not successful because docker exits on user input. I know I can use the -i -t options on the docker run command, but I want to do the equivalent in a Dockerfile. How is that possible?
You can try with expect or a similar tool.
The easiest way to configure it is using the autoexpect tool, which lets you run the commands interactively and creates an expect script for you.
I couldn't get the rvmsudo stuff working (I haven't used it and didn't want to spend too much time on it), so I decided to demonstrate with vi instead. First, run autoexpect:
$ autoexpect vi test
This will open vi, and you can create or edit the file and save it. After exiting vi you'll see your file test as well as an expect script, script.exp.
You can then remove the test file and execute script.exp. It will recreate the same file using the same steps.
The autoexpect tool is great, but you may have to write a script from scratch if you need more control over what happens, e.g. if you don't want the script to depend on exactly the input that was recorded.
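For reference, a hand-written expect script for a simple confirmation prompt might look like this (the ./setup command and the prompt text are assumptions; adjust them to whatever your program actually prints):
#!/usr/bin/expect -f
# drive a hypothetical installer that asks a single yes/no question
set timeout -1
spawn ./setup
expect "Do you want to continue?"
send "yes\r"
expect eof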

Do I need to re-make and re-install CouchDB every time I want to test a change to the source?

I am trying to contribute more to the CouchDB code, but I really have no idea of the right way to do it.
I have cloned the source from apache git repository and built it with
./configure
make && sudo make install
Then I wanted to change a file from the source called couch_httpd_show.erl
Do I need to run make && sudo make install again for every change I make to the source code and want to see how it behaves?
I am sure there's a more practical way to do it, because this approach consumes a fair amount of time and patience, right?
Yes, there is a shortcut.
./configure
make dev
./utils/run
This builds and runs CouchDB entirely in the current directory. Instead of running as a background daemon, CouchDB will run in the foreground and output log messages to the terminal. It uses some local directories to store stuff: ./tmp/log for logs, ./tmp/lib for databases, and (if I remember correctly) ./etc/couch/local_dev.ini for configuration.
If you run this instead:
./utils/run -i
then you will also have an interactive Erlang prompt, which you can use to help debug.
When I work on CouchDB, I run this in the shell:
make dev && ./utils/run -i
After I change some code, I press ^C, up-arrow, return.
When I joined Couchio, I was responsible for production CouchDB deployments. I asked Chris Anderson for advice about something and he said, "Sorry, ask Jan. I've been just using utils/run for years!"
You can rebuild that one file and drop the output beam in place and restart.
erlc <file.erl>
and then copy the .beam file into place. To restart CouchDB, use either init:restart(). in the Erlang shell or POST /_restart to CouchDB.
You might also want to run the command-line Erlang and JavaScript test suites to ensure you didn't break anything.
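Roughly, and assuming a typical 1.x source checkout and install layout (the include path and the installed ebin directory are assumptions; adjust them to your tree and install prefix):
# recompile the single module against CouchDB's header files
erlc -I src/couchdb src/couchdb/couch_httpd_show.erl
# drop the new beam over the installed one
sudo cp couch_httpd_show.beam /usr/local/lib/couchdb/erlang/lib/couch-*/ebin/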
