Objective:
Create a Docker volume
cd into that volume to check its contents
$ docker volume create hello_world
$ docker volume ls
DRIVER              VOLUME NAME
local               hello_world
$ docker inspect hello_world
[
    {
        "CreatedAt": "2018-11-19T05:37:03Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/mnt/sda1/var/lib/docker/volumes/hello_world/_data",
        "Name": "hello_world",
        "Options": {},
        "Scope": "local"
    }
]
$ cd /mnt
bash: cd: /mnt: No such file or directory
If I cannot cd to the Mountpoint, where is the data actually being saved?
I also tried this:
$ docker-machine ssh default
[Boot2Docker ASCII-art banner]
Boot2Docker version 18.06.1-ce, build HEAD : c7e5c3e - Wed Aug 22 16:27:42 UTC 2018
Docker version 18.06.1-ce, build e68fc7a
docker@default:~$ cd /mnt/sda1/var/lib/docker/volumes/hello_world/_data
-sh: cd: can't cd to /mnt/sda1/var/lib/docker/volumes/hello_world/_data
I am using Windows 8 with Docker Toolbox.
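With Docker Toolbox the Docker daemon runs inside the boot2docker VM, so the Mountpoint path exists only in that VM's filesystem, not on Windows. One way to inspect the volume's contents without hunting for the path is to mount it into a throwaway container (a sketch, assuming the alpine image is available; any small image works):

```shell
$ docker run --rm -v hello_world:/vol alpine ls -la /vol
```

The named volume is mounted at /vol inside the temporary container, which is removed again when the command exits.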
A colleague and I both use Azure DevOps build agents self-hosted in Docker. We're both connecting to separate DevOps organisations, but other than that the agents are the same - we're using the latest version of the image (mcr.microsoft.com/azure-pipelines/vsts-agent), both running on the same host OS, same version of Docker.
My build agent runs jobs and outputs the log data to the Azure DevOps console in a sensible format:
Starting: Initialize job
Agent name: 'AS01'
Agent machine name: '7b20a38350c7'
Current agent version: '2.183.1'
Agent running as: 'root'
Prepare build directory.
Set build variables.
Download all required tasks.
Checking job knob settings.
Finished checking job knob settings.
Start tracking orphan processes.
Finishing: Initialize job
His, however, seems to replace any space character ' ' with triple asterisks:
Starting:***Initialize***job
Agent***name:***'SERVER1'
Agent***machine***name:***'SERVER1'
Current***agent***version:***'2.185.0'
Agent***running***as:***'root'
Prepare***build***directory.
Set***build***variables.
Download***all***required***tasks.
Downloading***task:***DockerInstaller***(0.181.0)
Downloading***task:***Docker***(2.181.2)
Checking***job***knob***settings.
***Knob:***AgentToolsDirectory***=***/opt/hostedtoolcache***Source:***${AGENT_TOOLSDIRECTORY}***
***Knob:***TaskRestrictionsEnforcementMode***=***WarningOnly***Source:***$(agent.taskRestrictionsEnforcementMode)***
***Knob:***EnableTaskRestrictionsTelemetry***=***true***Source:***$(agent.enableTaskRestrictionsTelemetry)***
Finished***checking***job***knob***settings.
Start***tracking***orphan***processes.
Finishing:***Initialize***job
The most obvious difference I can spot is that the agent versions are different; mine is 2.183.2 (but signs on in the container logs as v2.184.2).
[Azure Pipelines ASCII-art banner]
agent v2.184.2 (commit 01983e9)
His claims to be running 2.185.0 and signs on in the container logs as such.
[Azure Pipelines ASCII-art banner]
agent v2.185.0 (commit a01fe8d)
It seems that redeploying the container from the latest image did not cause my environment to use a newer agent version -- the image has a startup script that checks which version to download, then downloads and installs it. I even ran the same image in my environment with my colleague's access token and organisation name, which caused the image to download 2.185.0 from his tenant. Then I swapped the situation: using his environment and his image with my access token and organisation name, 2.184.2 was downloaded. So the problem doesn't seem specific to our Docker environments, nor to the servers running them; it appears to be tied to the DevOps organisation.
Is there possibly a problem with version 2.185.0 of the agent that causes these weird asterisks in the logs? Or, is there a way I can force my agent to use v2.185.0 to see if the asterisks appear that way?
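For the last question, one hedged sketch: Azure Pipelines agent releases are published at a predictable download URL, so you could try fetching v2.185.0 yourself instead of letting the image's startup script choose. The startup script may still replace whatever you pre-install, and the /azp/agent path is an assumption about the image layout:

```shell
#!/bin/sh
# Construct the release download URL for a pinned agent version.
AGENT_VERSION=2.185.0
AGENT_URL="https://vstsagentpackage.azureedge.net/agent/${AGENT_VERSION}/vsts-agent-linux-x64-${AGENT_VERSION}.tar.gz"
echo "${AGENT_URL}"
# Inside the container you would then unpack it over the bundled agent, e.g.:
#   curl -LsS "${AGENT_URL}" | tar -xz -C /azp/agent
```

This at least lets you test whether the asterisks appear under 2.185.0 in your own organisation.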
I have successfully jailbroken my Hue Bridge 2.1 and now have root access to it over SSH.
But I don't know how to install a package manager (like opkg) on it.
It looks like wget is installed, but not much else. SCP also works.
I have tried everything, but nothing seems to work (see the console output below).
login as: root
root@192.168.1.69's password:
BusyBox v1.23.2 (2018-10-25 16:12:28 UTC) built-in shell (ash)
[HUE Bridge 2.X ASCII-art banner]
----------------------------------------------------------------------
Version: 1810251352
----------------------------------------------------------------------
root@Wohnzimmer:~# busybox --install opkg
--install: applet not found
root@Wohnzimmer:~# opkg
-ash: opkg: not found
root@Wohnzimmer:~# wget
BusyBox v1.23.2 (2018-10-25 16:12:28 UTC) multi-call binary.
Usage: wget [-c|--continue] [-s|--spider] [-q|--quiet] [-O|--output-document FILE]
[--header 'header: value'] [-Y|--proxy on/off] [-P DIR]
[-U|--user-agent AGENT] URL...
Retrieve files via HTTP or FTP
-s Spider mode - only check file existence
-c Continue retrieval of aborted transfer
-q Quiet
-P DIR Save to DIR (default .)
-O FILE Save to FILE ('-' for stdout)
-U STR Use STR for User-Agent header
-Y Use proxy ('on' or 'off')
You can't just install the package manager using busybox --install opkg or sudo apt-get install <any-package>.
The error message below clearly says the opkg applet is not available; you need to build it from source and add it to your target.
root@Wohnzimmer:~# busybox --install opkg
--install: applet not found
This is a BusyBox system, so you need to cross-compile the package manager for the target architecture it uses. For example, if the target is ARM, compile it with an ARM toolchain on your host system, then move the compiled binary to the target. After moving the compiled utility, you can place the binary in the /sbin directory of the root filesystem.
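A hedged sketch of that workflow (the toolchain prefix and the mytool name are placeholders; the actual target triple depends on the SoC inside the Hue Bridge 2.x):

```shell
$ arm-linux-gnueabi-gcc -static -o mytool mytool.c   # cross-compile statically on the host
$ scp mytool root@192.168.1.69:/sbin/                # copy the binary to the bridge over SCP
$ ssh root@192.168.1.69 '/sbin/mytool'               # run it on the target
```

Linking statically avoids depending on whatever libc the bridge firmware ships.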
After creating a Docker image from Red Hat Fuse 7, I went into the container's bin folder and executed the ./client command. The Fuse client instance loaded:
karaf@3e06e92ec257:/opt/karaf/bin$ ./client
Logging in as admin
[Red Hat Fuse ASCII-art banner]
Red Hat Fuse (7.0.0.fuse-000191-redhat-1)
http://www.redhat.com/products/jbossenterprisemiddleware/fuse/
Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Open a browser to http://localhost:8181/hawtio to access the management console
Hit '<ctrl-d>' or 'shutdown' to shutdown Red Hat Fuse.
admin@root()>
but as soon as I pressed any key, it returned the error below and the instance closed:
admin@root()> Error executing command: / by zero
Sample Dockerfile and commands
docker run --name fuse -d -p 1099:1099 -p 8101:8101 -p 44444:44444 redhat-fuse-v1
docker exec -it fuse /opt/karaf/bin/client
docker logs -f fuse
FROM openjdk:8-slim
MAINTAINER Jayashankar C <jshankarc1@gmail.com>
ENV KARAF_USER karaf
ENV KARAF_UID 8181
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
ENV JAVA_MAX_MEM 256m
ENV KARAF_EXEC exec
COPY fuse-karaf-7.0.0.fuse-000191-redhat-1.zip /tmp/fuse-karaf.zip
COPY org.ops4j.pax.url.mvn.cfg /tmp
COPY users.properties /tmp
RUN groupadd -r $KARAF_USER --gid=$KARAF_UID && useradd -rm -g $KARAF_USER --uid=$KARAF_UID $KARAF_USER
RUN mkdir -p /opt/karaf \
&& unzip -o /tmp/fuse-karaf.zip -d /opt/karaf \
&& mv /opt/karaf/fuse-karaf-7.0.0.fuse-000191-redhat-1/* /opt/karaf > /dev/null 2>&1 \
&& ls /opt/karaf -ltr \
&& rm /tmp/fuse-karaf.zip \
&& cp -f /tmp/org.ops4j.pax.url.mvn.cfg /opt/karaf/etc \
&& cp -f /tmp/users.properties /opt/karaf/etc \
&& cat /opt/karaf/etc/org.ops4j.pax.url.mvn.cfg \
&& mkdir -p /opt/karaf/data /opt/karaf/data/log \
&& chown -R $KARAF_USER.$KARAF_USER /opt/karaf \
&& chmod 700 /opt/karaf/data
EXPOSE 1099 8101 44444
USER $KARAF_USER
CMD ["/opt/karaf/bin/karaf", "run"]
Error log
2018-05-31 18:14:21,757 | ERROR | h console user admin | o.a.k.s.s.ShellUtil | 143 - org.apache.karaf.shell.core - 4.2.0.fuse-000237-redhat-1 | Exception caught while executing command
java.lang.ArithmeticException: / by zero
at org.aesh.readline.Buffer.printInsertedData(Buffer.java:570) ~[141:org.apache.karaf.shell.aesh-jline:4.2.0.fuse-000237-redhat-1]
at org.aesh.readline.Buffer.insert(Buffer.java:227) ~[141:org.apache.karaf.shell.aesh-jline:4.2.0.fuse-000237-redhat-1]
at org.aesh.readline.AeshConsoleBuffer.writeChar(AeshConsoleBuffer.java:128) ~[141:org.apache.karaf.shell.aesh-jline:4.2.0.fuse-000237-redhat-1]
at org.aesh.readline.Readline$AeshInputProcessor.parse(Readline.java:262) ~[141:org.apache.karaf.shell.aesh-jline:4.2.0.fuse-000237-redhat-1]
at org.aesh.readline.Readline$AeshInputProcessor.access$100(Readline.java:174) ~[141:org.apache.karaf.shell.aesh-jline:4.2.0.fuse-000237-redhat-1]
at org.aesh.readline.Readline.readInput(Readline.java:95) ~[141:org.apache.karaf.shell.aesh-jline:4.2.0.fuse-000237-redhat-1]
at org.aesh.readline.Readline.access$1000(Readline.java:57) ~[141:org.apache.karaf.shell.aesh-jline:4.2.0.fuse-000237-redhat-1]
at org.aesh.readline.Readline$AeshInputProcessor.lambda$start$1(Readline.java:333) ~[141:org.apache.karaf.shell.aesh-jline:4.2.0.fuse-000237-redhat-1]
at org.jline.reader.impl.TerminalConnection.openBlocking(TerminalConnection.java:152) ~[141:org.apache.karaf.shell.aesh-jline:4.2.0.fuse-000237-redhat-1]
at org.jline.reader.impl.LineReaderImpl.readInput(LineReaderImpl.java:137) ~[141:org.apache.karaf.shell.aesh-jline:4.2.0.fuse-000237-redhat-1]
at org.jline.reader.impl.LineReaderImpl.readLine(LineReaderImpl.java:275) ~[141:org.apache.karaf.shell.aesh-jline:4.2.0.fuse-000237-redhat-1]
at org.apache.karaf.shell.impl.console.ConsoleSessionImpl.readCommand(ConsoleSessionImpl.java:436) [143:org.apache.karaf.shell.core:4.2.0.fuse-000237-redhat-1]
at org.apache.karaf.shell.impl.console.ConsoleSessionImpl.run(ConsoleSessionImpl.java:394) [143:org.apache.karaf.shell.core:4.2.0.fuse-000237-redhat-1]
at java.lang.Thread.run(Thread.java:748) [?:?]
If you look a bit earlier in the log, you'll find the following event:
852 [sshd-SshClient[3a82f6ef]-nio2-thread-3] DEBUG org.apache.sshd.client.channel.ChannelShell - doOpenPty(ChannelShell[id=0, recipient=0]-ClientSessionImpl[karaf#localhost/127.0.0.1:8101]) Send SSH_MSG_CHANNEL_REQUEST env: {TERM=null, LC_CTYPE=en.UTF-8}
The problem is the TERM=null environment variable, which ultimately causes the "/ by zero" error.
You need to run the following command instead:
docker exec -it -e TERM=${TERM} fuse /opt/karaf/bin/client
On older Docker versions, you can run the following instead (note the semicolon separating the two commands):
docker exec -it fuse bash -c "export TERM=${TERM}; /opt/karaf/bin/client"
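To sanity-check that the variable is being passed through, you can print what the container session sees (using the container name from the question):

```shell
$ docker exec -it -e TERM=${TERM} fuse sh -c 'echo $TERM'
```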
Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 6 years ago.
When I map a directory it doesn't show up in my docker container.
I am on docker 1.11.2 on Mac using toolbox.
calloway$ docker -v
Docker version 1.11.2, build b9f10c9
calloway$ ls -ltr /tmp/foo/
total 0
-rw-r--r-- 1 calloway wheel 0 Jun 8 09:21 regularfile.txt
calloway$ docker run -it -v /tmp/foo:/mytmp -w /mytmp ubuntu bash
root@26fc182f7964:/mytmp# ls
root@26fc182f7964:/mytmp# exit
More exploration: the mapped /tmp is /tmp on the "default" virtual machine.
Joshuas-MBP:~ joshuacalloway$ docker-machine ssh default
[Boot2Docker ASCII-art banner]
Boot2Docker version 1.10.3, build master : 625117e - Thu Mar 10 22:09:02 UTC 2016
Docker version 1.10.3, build 20f81dd
docker@default:~$ mkdir /tmp/OnDefaultVM
docker@default:~$ touch /tmp/OnDefaultVM/myfile.txt
docker@default:~$ exit
Joshuas-MBP:~ joshuacalloway$ docker run -it -v /tmp/OnDefaultVM:/mytmp -w /mytmp ubuntu bash
root@1184ff43dc88:/mytmp# ls
myfile.txt
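This behaviour matches how Docker Toolbox shares files: by default only /Users (on macOS) is shared from the host into the boot2docker VM, so bind mounts from other host paths such as /tmp show up empty inside containers. A sketch of the workaround, assuming the home directory is /Users/joshuacalloway:

```shell
$ mkdir -p /Users/joshuacalloway/foo
$ touch /Users/joshuacalloway/foo/regularfile.txt
$ docker run -it -v /Users/joshuacalloway/foo:/mytmp -w /mytmp ubuntu ls
```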
I'm using an Ansible playbook to manage installation of Docker containers. I have the following playbook, which installs Cassandra:
I want to run this playbook locally, and have it install into Boot2Docker. I am able to SSH into Boot2Docker using the instructions from this answer:
$ ssh -i $HOME/.ssh/id_boot2docker -p 2022 docker@localhost
[Boot2Docker ASCII-art banner]
Boot2Docker version 1.4.1, build master : 86f7ec8 - Tue Dec 16 23:11:29 UTC 2014
Docker version 1.4.1, build 5bc2ff8
docker@boot2docker:~$
I made an inventory file with the same SSH settings:
[local]
localhost ansible_ssh_port=2022 ansible_ssh_user=docker ansible_ssh_private_key_file=~/.ssh/id_boot2docker
But when I run the playbook, it fails with the error "/bin/sh: /usr/bin/python: not found":
$ ansible-playbook db-setup.yml -i hosts.local
PLAY [local] ******************************************************************
GATHERING FACTS ***************************************************************
failed: [localhost] => {"failed": true, "parsed": false}
/bin/sh: /usr/bin/python: not found
OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: /etc/ssh_config line 102: Applying options for *
debug1: auto-mux: Trying existing master
debug1: mux_client_request_session: master session id: 2
Shared connection to localhost closed.
TASK: [Database] **************************************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/Users/bryan/db-setup.retry
localhost : ok=0 changed=0 unreachable=0 failed=1
I still get the error even if "gather facts" is turned off. If I SSH into Boot2Docker, I can see that /usr/bin/python exists:
$ ssh -i $HOME/.ssh/id_boot2docker -p 2022 docker@localhost
...
docker@boot2docker:~$ which python
boot2docker ssh "tce-load -w -i python.tcz" does the trick as well (you need internet access). For the Docker modules in Ansible you will also need "docker-py".
Set up pip: log in to boot2docker and run
wget https://bootstrap.pypa.io/get-pip.py
python get-pip.py
pip install docker-py
Also add this to your inventory file:
dockerhost ansible_connection=ssh ansible_ssh_host=192.168.59.103 ansible_ssh_user=docker ansible_ssh_private_key_file=~/.ssh/id_boot2docker ansible_python_interpreter=/usr/local/bin/python
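With that inventory entry in place, a quick connectivity check (assuming the file is saved as hosts.local, matching the question) would be:

```shell
$ ansible dockerhost -i hosts.local -m ping
```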
The solution was simple: Python isn't installed by default on Boot2Docker.
To install it, run
$ boot2docker ssh "wget http://www.tinycorelinux.net/6.x/x86/tcz/python.tcz && tce-load -i python.tcz && rm -f python.tcz"
I created a script to do this automatically; see
https://gist.github.com/bcattle/90e64fbe808b3409ec2f