Package manager on Hue Bridge - OpenWrt

I have successfully jailbroken my Hue Bridge 2.1 and now have root access to it over SSH.
But I don't know how to install a package manager (like opkg) on it.
It looks like wget is installed, but not much else. SCP works as well.
I have tried everything, but nothing seems to work. (See console below.) (Full console dump here)
login as: root
root@192.168.1.69's password:
BusyBox v1.23.2 (2018-10-25 16:12:28 UTC) built-in shell (ash)
_ _ _ _ ______ ____ _ _ ___ __ __
| | | | | | | ____| | _ \ (_) | | |__ \ \ \ / /
| |__| | | | | |__ | |_) |_ __ _ __| | __ _ ___ ) | \ V /
| __ | | | | __| | _ <| '__| |/ _` |/ _` |/ _ \ / / > <
| | | | |__| | |____ | |_) | | | | (_| | (_| | __/ / /_ _ / . \
|_| |_|\____/|______| |____/|_| |_|\__,_|\__, |\___| |____(_)_/ \_\
__/ |
|___/
----------------------------------------------------------------------
Version: 1810251352
----------------------------------------------------------------------
root@Wohnzimmer:~# busybox --install opkg
--install: applet not found
root@Wohnzimmer:~# opkg
-ash: opkg: not found
root@Wohnzimmer:~# wget
BusyBox v1.23.2 (2018-10-25 16:12:28 UTC) multi-call binary.
Usage: wget [-c|--continue] [-s|--spider] [-q|--quiet] [-O|--output-document FILE]
[--header 'header: value'] [-Y|--proxy on/off] [-P DIR]
[-U|--user-agent AGENT] URL...
Retrieve files via HTTP or FTP
-s Spider mode - only check file existence
-c Continue retrieval of aborted transfer
-q Quiet
-P DIR Save to DIR (default .)
-O FILE Save to FILE ('-' for stdout)
-U STR Use STR for User-Agent header
-Y Use proxy ('on' or 'off')

You can't just install the package manager with busybox --install opkg or sudo apt-get install <any-package>.
The error message below clearly says the opkg applet is not found; you need to build it from source and add it to your target:
root@Wohnzimmer:~# busybox --install opkg
--install: applet not found
This is a BusyBox system, so you need to build the package manager for the architecture the target uses. For example, if it is ARM, compile opkg with an ARM toolchain on your host system and then move the compiled binary to the target. Once copied over, you can place the binary in the /sbin directory of the root filesystem.
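A minimal sketch of that workflow, assuming an ARM target, an arm-linux-gnueabi cross-toolchain on the build host, and the bridge IP from the session above; the opkg version/URL and the location of the built binary are assumptions, and opkg's own dependencies (libarchive, for example) would have to be cross-built the same way:
# on the build host: fetch the opkg sources and cross-compile them (version/URL assumed)
wget https://downloads.yoctoproject.org/releases/opkg/opkg-0.4.5.tar.gz
tar xzf opkg-0.4.5.tar.gz && cd opkg-0.4.5
./configure --host=arm-linux-gnueabi --prefix=/usr
make
# copy the resulting binary onto the bridge (the build usually lands in src/) and test it
scp src/opkg root@192.168.1.69:/sbin/opkg
# running it with no arguments should print its usage if the binary matches the target
ssh root@192.168.1.69 '/sbin/opkg'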

Related

Self-hosted Azure DevOps build agent in Docker yields unusual log formatting

A colleague and I both use Azure DevOps build agents self-hosted in Docker. We're both connecting to separate DevOps organisations, but other than that the agents are the same - we're using the latest version of the image (mcr.microsoft.com/azure-pipelines/vsts-agent), both running on the same host OS, same version of Docker.
My build agent runs jobs and outputs the log data to the Azure DevOps console in a sensible format:
Starting: Initialize job
Agent name: 'AS01'
Agent machine name: '7b20a38350c7'
Current agent version: '2.183.1'
Agent running as: 'root'
Prepare build directory.
Set build variables.
Download all required tasks.
Checking job knob settings.
Finished checking job knob settings.
Start tracking orphan processes.
Finishing: Initialize job
His, however, seems to replace any space character ' ' with triple asterisks:
Starting:***Initialize***job
Agent***name:***'SERVER1'
Agent***machine***name:***'SERVER1'
Current***agent***version:***'2.185.0'
Agent***running***as:***'root'
Prepare***build***directory.
Set***build***variables.
Download***all***required***tasks.
Downloading***task:***DockerInstaller***(0.181.0)
Downloading***task:***Docker***(2.181.2)
Checking***job***knob***settings.
***Knob:***AgentToolsDirectory***=***/opt/hostedtoolcache***Source:***${AGENT_TOOLSDIRECTORY}***
***Knob:***TaskRestrictionsEnforcementMode***=***WarningOnly***Source:***$(agent.taskRestrictionsEnforcementMode)***
***Knob:***EnableTaskRestrictionsTelemetry***=***true***Source:***$(agent.enableTaskRestrictionsTelemetry)***
Finished***checking***job***knob***settings.
Start***tracking***orphan***processes.
Finishing:***Initialize***job
The most obvious difference I can spot is that the agent versions are different; mine is 2.183.2 (but signs on in the container logs as v2.184.2).
___ ______ _ _ _
/ _ \ | ___ (_) | (_)
/ /_\ \_____ _ _ __ ___ | |_/ /_ _ __ ___| |_ _ __ ___ ___
| _ |_ / | | | '__/ _ \ | __/| | '_ \ / _ \ | | '_ \ / _ \/ __|
| | | |/ /| |_| | | | __/ | | | | |_) | __/ | | | | | __/\__ \
\_| |_/___|\__,_|_| \___| \_| |_| .__/ \___|_|_|_| |_|\___||___/
| |
agent v2.184.2 |_| (commit 01983e9)
His claims to be running 2.185.0 and signs on in the container logs as such.
___ ______ _ _ _
/ _ \ | ___ (_) | (_)
/ /_\ \_____ _ _ __ ___ | |_/ /_ _ __ ___| |_ _ __ ___ ___
| _ |_ / | | | '__/ _ \ | __/| | '_ \ / _ \ | | '_ \ / _ \/ __|
| | | |/ /| |_| | | | __/ | | | | |_) | __/ | | | | | __/\__ \
\_| |_/___|\__,_|_| \___| \_| |_| .__/ \___|_|_|_| |_|\___||___/
| |
agent v2.185.0 |_| (commit a01fe8d)
It seems that redeploying the container from the latest image did not cause a newer agent version to be used in my environment - the image has a startup script that checks which version to download, then downloads and installs it. I even tried running the same image in my environment with my colleague's access token and organisation name, which caused it to download 2.185.0 from his tenant. Then I swapped the situation and ran his image in his environment with my access token and organisation name, and 2.184.2 was downloaded. So the problem isn't specific to our Docker environments or the servers running them - it seems to be tied to the DevOps organisation.
Is there possibly a problem with version 2.185.0 of the agent that causes these weird asterisks in the logs? Or, is there a way I can force my agent to use v2.185.0 to see if the asterisks appear that way?
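For the second question, one hedged way to force a specific agent version is to skip the image's auto-download and configure a pinned release yourself, either interactively in the container or in a derived image. This is a sketch rather than the image's own mechanism: the CDN URL is the download pattern shown by the agent pool's "Download agent" dialog (treat it as an assumption), and ORG, TOKEN, the pool and the agent name are placeholders:
# inside the container (or a derived image): download and configure a pinned agent
export AGENT_ALLOW_RUNASROOT=1   # the agent refuses to configure as root without this
AGENT_VERSION=2.185.0
curl -LsS -o agent.tar.gz "https://vstsagentpackage.azureedge.net/agent/${AGENT_VERSION}/vsts-agent-linux-x64-${AGENT_VERSION}.tar.gz"
mkdir -p /azp/pinned-agent && tar -xzf agent.tar.gz -C /azp/pinned-agent && cd /azp/pinned-agent
./config.sh --unattended --url https://dev.azure.com/ORG --auth pat --token TOKEN --pool Default --agent pinned-agent --replace
./run.sh
If the asterisks show up on 2.185.0 as well, that would point at the agent version rather than the organisation settings.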

Docker cannot `cd` to volume dir

Objective:
Create a docker volume
cd to that volume to check its contents
$ docker volume create hello_world
$ docker volume ls
DRIVER VOLUME NAME
local hello_world
$ docker inspect hello_world
[
{
"CreatedAt": "2018-11-19T05:37:03Z",
"Driver": "local",
"Labels": {},
"Mountpoint": "/mnt/sda1/var/lib/docker/volumes/hello_world/_data",
"Name": "hello_world",
"Options": {},
"Scope": "local"
}
]
$ cd /mnt
bash: cd: /mnt: No such file or directory
If I cannot cd to the Mountpoint, where is the data being saved?
I also tried this:
$ docker-machine ssh default
## .
## ## ## ==
## ## ## ## ## ===
/"""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ / ===- ~~~
\______ o __/
\ \ __/
\____\_______/
_ _ ____ _ _
| |__ ___ ___ | |_|___ \ __| | ___ ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__| < __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 18.06.1-ce, build HEAD : c7e5c3e - Wed Aug 22 16:27:42 UTC 2018
Docker version 18.06.1-ce, build e68fc7a
docker@default:~$ cd /mnt/sda1/var/lib/docker/volumes/hello_world/_data
-sh: cd: can't cd to /mnt/sda1/var/lib/docker/volumes/hello_world/_data
I am using Windows 8 with Docker Toolbox.
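Since the volume data lives inside the Boot2Docker VM's filesystem (and /var/lib/docker is normally readable only by root), a hedged way to check the contents is either to use sudo inside the VM or to mount the named volume into a throwaway container; alpine is just an illustrative image here:
# inspect the volume from a short-lived container
docker run --rm -v hello_world:/data alpine ls -la /data
# or look at it from inside the VM with root privileges
docker-machine ssh default "sudo ls -la /mnt/sda1/var/lib/docker/volumes/hello_world/_data"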

docker mapping volume directory not showing up [closed]

When I map a directory, it doesn't show up in my Docker container.
I am on Docker 1.11.2 on a Mac, using Toolbox.
calloway$ docker -v
Docker version 1.11.2, build b9f10c9
calloway$ ls -ltr /tmp/foo/
total 0
-rw-r--r-- 1 calloway wheel 0 Jun 8 09:21 regularfile.txt
calloway$ docker run -it -v /tmp/foo:/mytmp -w /mytmp ubuntu bash
root@26fc182f7964:/mytmp# ls
root@26fc182f7964:/mytmp# exit
More exploration: the /tmp that gets mapped is /tmp on the "default" virtual machine.
Joshuas-MBP:~ joshuacalloway$ docker-machine ssh default
## .
## ## ## ==
## ## ## ## ## ===
/"""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ / ===- ~~~
\______ o __/
\ \ __/
\____\_______/
_ _ ____ _ _
| |__ ___ ___ | |_|___ \ __| | ___ ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__| < __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.10.3, build master : 625117e - Thu Mar 10 22:09:02 UTC 2016
Docker version 1.10.3, build 20f81dd
docker@default:~$ mkdir /tmp/OnDefaultVM
docker@default:~$ touch /tmp/OnDefaultVM/myfile.txt
docker@default:~$ exit
Joshuas-MBP:~ joshuacalloway$ docker run -it -v /tmp/OnDefaultVM:/mytmp -w /mytmp ubuntu bash
root@1184ff43dc88:/mytmp# ls
myfile.txt
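The underlying issue is that the -v path is resolved inside the Boot2Docker VM, not on the Mac; by default the Toolbox VM only shares /Users from the host, so a hedged workaround is to keep the bind-mounted directory somewhere under /Users (the folder name is illustrative):
mkdir -p /Users/joshuacalloway/foo
docker run -it -v /Users/joshuacalloway/foo:/mytmp -w /mytmp ubuntu bash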

`foxx-manager update` fails inside corporate proxy environment

With Docker 1.7.1, foxx-manager update gets an error 500 when downloading master.zip from the central repository. However, no error occurred with Docker 1.6.1.
Has anyone encountered this problem?
How can I specify https_proxy for arangosh? foxx-manager update fails inside a corporate proxy environment.
I've tried these settings:
export https_proxy=http://xx.xx.xx.xx:port
export https_proxy=xx.xx.xx.xx:port
export HTTPS_PROXY=http://xx.xx.xx.xx:port
export HTTPS_PROXY=xx.xx.xx.xx:port
... and all failed.
Below is my session log:
[t.suwa@devstudy ~]$ docker run -d arangodb
e3175d53cd1fc288201bfeebaaf95084c1409c4299ce1b39369d131bf2964d0a
Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
[t.suwa@devstudy ~]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e3175d53cd1f arangodb "/usr/sbin/arangod" 11 seconds ago Up 9 seconds 8529/tcp backstabbing_albattani
[t.suwa@devstudy ~]$ docker exec -it e3175d53cd1f /bin/bash
root@8155996d26ff:/# arangosh
_
__ _ _ __ __ _ _ __ __ _ ___ ___| |__
/ _` | '__/ _` | '_ \ / _` |/ _ \/ __| '_ \
| (_| | | | (_| | | | | (_| | (_) \__ \ | | |
\__,_|_| \__,_|_| |_|\__, |\___/|___/_| |_|
|___/
Welcome to arangosh 2.6.7 [linux]. Copyright (c) ArangoDB GmbH
Using Google V8 4.1.0.27 JavaScript engine, READLINE 6.3, ICU 54.1
Pretty printing values.
Connected to ArangoDB 'tcp://127.0.0.1:8529' version: 2.6.7 [standalone], database: '_system', username: 'root'
Type 'tutorial' for a tutorial or 'help' to see common examples
arangosh [_system]> fm.update()
JavaScript exception in file '/usr/share/arangodb/js/common/modules/org/arangodb/foxx/store.js' at 410,11: [ArangoError 1752: application download failed: Github download from 'https://github.com/arangodb/foxx-apps/archive/master.zip' failed with error code 500]
! throw err;
! ^
stacktrace: Error
at exports.throwDownloadError (/usr/share/arangodb/js/common/modules/org/arangodb-common.js:448:9)
at Object.update (/usr/share/arangodb/js/common/modules/org/arangodb/foxx/store.js:392:7)
at <shell command>:1:4
If your environment is behind a proxy, add these to your Dockerfile:
ENV https_proxy=http://xx.xx.xx.xx:port
ENV https_proxy=xx.xx.xx.xx:port
ENV HTTPS_PROXY=http://xx.xx.xx.xx:port
ENV HTTPS_PROXY=xx.xx.xx.xx:port
ADD .gitconfig /.gitconfig
The local .gitconfig file should have the proxy settings as well:
[http]
proxy = http://xx.xx.xx.xx:port
[https]
proxy = http://xx.xx.xx.xx:port
This assumes you install and run the application as root in the container; if not, copy .gitconfig to that user's home directory.
Build the image with the proxy settings in place, and you should then be able to download packages from within the container.
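A hedged build-and-run sequence once the Dockerfile and .gitconfig are in place (the image tag is illustrative); whether run-time variables alone are enough depends on what actually performs the download, so the Dockerfile route above is the safer bet:
docker build -t arangodb-proxy .
docker run -d arangodb-proxy
# alternatively, without rebuilding, the proxy can be injected at run time
docker run -d -e https_proxy=http://xx.xx.xx.xx:port -e HTTPS_PROXY=http://xx.xx.xx.xx:port arangodb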

Issues getting Ansible to work with Boot2Docker

I'm using an Ansible playbook to manage installation of Docker containers. I have the following playbook, which installs Cassandra:
I want to run this playbook locally, and have it install into Boot2Docker. I am able to SSH into Boot2Docker using the instructions from this answer:
$ ssh -i $HOME/.ssh/id_boot2docker -p 2022 docker@localhost
## .
## ## ## ==
## ## ## ## ===
/""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~
\______ o __/
\ \ __/
\____\______/
_ _ ____ _ _
| |__ ___ ___ | |_|___ \ __| | ___ ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__| < __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.4.1, build master : 86f7ec8 - Tue Dec 16 23:11:29 UTC 2014
Docker version 1.4.1, build 5bc2ff8
docker@boot2docker:~$
I made an inventory file with the same SSH settings:
[local]
localhost ansible_ssh_port=2022 ansible_ssh_user=docker ansible_ssh_private_key_file=~/.ssh/id_boot2docker
But when I run the playbook, it fails with the error "/bin/sh: /usr/bin/python: not found":
$ ansible-playbook db-setup.yml -i hosts.local
PLAY [local] ******************************************************************
GATHERING FACTS ***************************************************************
failed: [localhost] => {"failed": true, "parsed": false}
/bin/sh: /usr/bin/python: not found
OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: /etc/ssh_config line 102: Applying options for *
debug1: auto-mux: Trying existing master
debug1: mux_client_request_session: master session id: 2
Shared connection to localhost closed.
TASK: [Database] **************************************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/Users/bryan/db-setup.retry
localhost : ok=0 changed=0 unreachable=0 failed=1
I still get the error even if "gather facts" is turned off. If I SSH into Boot2Docker, I can see that /usr/bin/python exists:
$ ssh -i $HOME/.ssh/id_boot2docker -p 2022 docker@localhost
...
docker@boot2docker:~$ which python
boot2docker ssh "tce-load -w -i python.tcz" does the trick as well (you need internet ;-)) for docker and ansible you will need "docker-py"
Setup pip, login to boot2docker
wget https://bootstrap.pypa.io/get-pip.py
python get-pip.py
pip install docker-py
Also add to your inventory file:
dockerhost ansible_connection=ssh ansible_ssh_host=192.168.59.103 ansible_ssh_user=docker ansible_ssh_private_key_file=~/.ssh/id_boot2docker ansible_python_interpreter=/usr/local/bin/python
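With that entry in place, a quick hedged sanity check (the inventory filename is illustrative) is to ping the host through Ansible, which exercises both the SSH settings and the configured Python interpreter:
ansible dockerhost -i hosts -m ping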
The solution was simple: Python isn't installed by default on Boot2Docker.
To install it, run:
$ boot2docker ssh "wget http://www.tinycorelinux.net/6.x/x86/tcz/python.tcz && tce-load -i python.tcz && rm -f python.tcz"
I created a script to do this automatically, see
https://gist.github.com/bcattle/90e64fbe808b3409ec2f
