Facing a problem while installing Acumos using the one-click deploy method (Kubernetes) - docker

I have followed the process below to install Acumos on an Ubuntu 18.04 server.
Open a shell session (bash recommended) on the host on which (for single AIO deployment) or from which (for peer-test deployment) you want to install Acumos, and clone the system-integration repo:
> git clone https://gerrit.acumos.org/r/system-integration
If you are deploying a single AIO instance, run the following command, selecting docker or kubernetes as the target environment. Further instructions for running the script are included at the top of the script.
> bash oneclick_deploy.sh
I ran it with k8s, as below:
> bash oneclick_deploy.sh k8s
Everything was running smoothly, but at the end I am facing the issue below:
docker API is not ready
Can anyone please help me with this?
Note: I have checked in the Kubernetes console and everything is fine. A service file was created, and a namespace named acumos was also created successfully.

I'm the developer of that toolset. I'll be happy to help you through this. Note that it's actively being developed and will be evolving a lot, but there are some easy things you can do to provide more details so I can debug your situation.
First, start with a clean env:
$ bash clean.sh
Then reattempt the deployment, piping the console log to a file:
$ bash oneclick_deploy.sh k8s 2>&1 | tee deploy.log
Review that file to be sure that there's nothing sensitive in it (e.g. passwords or other private info about the deployment that you don't want to share), and if possible attach it here so I can review it. That will be the quickest way to debug.
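For example, a quick pre-share scan for obvious secrets could look like this (the pattern is only a starting point, not a guarantee the log is clean):
$ grep -inE 'password|passwd|secret|token' deploy.log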
Also you can let me know some more about your deployment context:
Did you ensure the Prerequisites:
Ubuntu Xenial (16.04), Bionic (18.04), or Centos 7 hosts
All hostnames specified in acumos-env.sh must be DNS-resolvable on all hosts (entries in /etc/hosts or in an actual DNS server)
Did you customize acumos-env.sh, or use the default values?
Send the output of:
$ kubectl get svc -n acumos
$ kubectl get pods -n acumos
$ kubectl describe pods -n acumos
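Since the failure you quoted is the docker API readiness check, the namespace events and the logs of the docker pod itself may also show why it never came up; the pod name below is a placeholder for whatever kubectl get pods -n acumos reports:
$ kubectl get events -n acumos --sort-by=.lastTimestamp
$ kubectl logs -n acumos <docker-pod-name>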

Related

Kubernetes not enabled with Docker for Desktop (Windows 10), but "kubectl cluster-info" works. Why?

I uninstalled Docker and installed it again (using the stable release channel).
Is it normal that the command "kubectl cluster-info" shows the output:
Kubernetes master is running at https://localhost:6445
even though Kubernetes is not enabled in the Docker settings?
Thanks.
I have reproduced your case.
If you install Docker on Windows 10 without any other Kubernetes configuration, it will return this output:
$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
When you enable Kubernetes in Docker for Windows, you will receive this output:
$ kubectl cluster-info
Kubernetes master is running at https://localhost:6445
KubeDNS is running at https://localhost:6445/api/v1/namespaces/kube-system/services/kube-dns/proxy
After the reinstall I checked the current Kubernetes config, and it was as below:
$ kubectl config view
In the config you will still have:
...
server: https://localhost:6445
...
Even after I deleted Docker via the Control Panel, I still had the C:\Users\%USERNAME%\.docker and C:\Users\%USERNAME%\.kube directories with their configs.
To get back to the defaults you need to uninstall Docker, manually remove the .docker and .kube directories with their configs, and reinstall Docker.
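For reference, from a Git Bash/MINGW shell the manual removal might look like this (assuming the default profile location):
rm -rf /c/Users/$USERNAME/.docker /c/Users/$USERNAME/.kube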

Configure Docker with proxy per host/url

I use Docker Toolbox on Windows 7 in a corporate environment. My workflow requires pulling containers from one artifactory and pushing them to a different one (e.g. external and internal). Each artifactory requires a different proxy to access it. Is there a way to configure the Docker daemon to select a proxy based on the URL? Or, if not, what else can I do to make this work?
Since, as Pierre B. mentioned, the Docker daemon does not support URL-based proxy selection, the solution is to point it to a local proxy configured to select the proper upstream proxy based on the URL.
While any HTTP[S] proxy capable of upstream selection would do (the pac4cli project being particularly interesting for its advertised ability to select the upstream based on the proxy auto-discovery protocol used by most web browsers in a corporate setting), I've chosen to use tinyproxy as a more mature and lightweight solution. Furthermore, I've decided to run my proxy inside the docker-machine VM in order to simplify its deployment and make sure the proxy is always running when the Docker daemon needs it.
Below are the steps I used to set up my system. I'm especially grateful to phoenix for providing steps to set up Docker Toolbox on Windows behind a corporate proxy, and will borrow heavily from that answer.
From this point on I will assume either Docker Quickstart Terminal or Git Bash, with docker in the PATH, as your command-line console, and that "username" is your Windows user name.
Step 1: Build tinyproxy on your target platform
Begin by pulling a clean Linux distribution (I used CentOS) and running bash inside it:
docker run -it --name=centos centos bash
Next, install the tools we'll need:
yum install -y make gcc
After that we pull the latest release of Tinyproxy from its GitHub repository and extract it inside root's home directory (at the time of this writing the latest release was 1.10.0):
cd
curl -L https://github.com/tinyproxy/tinyproxy/releases/download/1.10.0/tinyproxy-1.10.0.tar.gz \
| tar -xz
cd tinyproxy-1.10.0
Now let's configure and build it:
./configure --enable-upstream \
  --disable-filter \
  --disable-reverse \
  --disable-transparent \
  --disable-xtinyproxy
make
While --enable-upstream is obviously required, disabling the other default features is optional but good practice. To make sure the build actually works, run:
./src/tinyproxy -h
You should see something like:
Usage: tinyproxy [options]
Options are:
-d Do not daemonize (run in foreground).
-c FILE Use an alternate configuration file.
-h Display this usage information.
-v Display version information.
Features compiled in:
Upstream proxy support
For support and bug reporting instructions, please visit
<https://tinyproxy.github.io/>.
We exit the container by pressing Ctrl+D and copy the executable to a folder accessible from the docker-machine VM:
docker cp centos://root/tinyproxy-1.10.0/src/tinyproxy \
/c/Users/username/tinyproxy
Substitute "username" with your Windows user name. Please note that the double slash (//) before "root" is required to disable MinGW path conversion.
Now we can delete the container:
docker rm centos
Step 2: Point the Docker daemon to a local proxy port
Choose a TCP port number to run the proxy on. This can be any port that is not in use on the docker-machine VM. I will use number 8618 in this example.
First, let's delete the existing default Docker VM:
WARNING: This will permanently erase all currently stored containers and images
docker-machine rm -f default
Next, we re-create the default machine, setting the HTTP_PROXY and HTTPS_PROXY environment variables to the local host and the port we selected, and then refresh our shell environment:
docker-machine create default \
--engine-env HTTP_PROXY=http://localhost:8618 \
--engine-env HTTPS_PROXY=http://localhost:8618
eval $(docker-machine env)
Optionally, we could also set the NO_PROXY environment variable to list hosts and/or wildcards (separated by commas) to which the daemon should connect directly, bypassing the proxy.
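For example, a create command with NO_PROXY might look like this (the domain is a placeholder):
docker-machine create default \
  --engine-env HTTP_PROXY=http://localhost:8618 \
  --engine-env HTTPS_PROXY=http://localhost:8618 \
  --engine-env NO_PROXY='localhost,127.0.0.1,.corp.example.com'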
Step 3: Set up tinyproxy inside docker-machine VM
First, we will create two files in the /c/Users/username directory (this is where our tinyproxy binary should reside after Step 1 above) and then we'll copy them to the VM.
The first file is tinyproxy.conf. The exact syntax is documented on the Tinyproxy website, but the example below should have all the settings we need:
# These settings can be customized to your liking,
# the port though must be the same we used in Step 2
listen 127.0.0.1
port 8618
user nobody
group nogroup
loglevel critical
syslog on
maxclients 50
startservers 2
minspareservers 2
maxspareservers 5
disableviaheader yes
# Here is the actual proxy selection, rules apply from top
# to bottom, and the last one is the default. More info on:
# https://tinyproxy.github.io/
upstream http proxy1.corp.example.com:80 ".foo.example.com"
upstream http proxy2.corp.example.com:80 ".bar.example.com"
upstream http proxy.corp.example.com:82
In the example above:
http://proxy1.corp.example.com:80 will be used to connect to URLs that end with "foo.example.com", such as http://www.foo.example.com
http://proxy2.corp.example.com:80 will be used to connect to URLs that end with "bar.example.com", such as http://www.bar.example.com, and
http://proxy.corp.example.com:82 will be used to connect to all other URLs
It is also possible to match exact host names, IP addresses, subnets and hosts without domains.
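For illustration, rules of those kinds could look like this, following the same syntax (the hosts, addresses and subnets below are made up):
# a host without a domain
upstream http proxy1.corp.example.com:80 "buildhost"
# a subnet
upstream http proxy1.corp.example.com:80 "10.62.0.0/16"
# an exact IP address
upstream http proxy2.corp.example.com:80 "192.168.128.10"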
The second file is the shell script that will launch the proxy; its name must be bootlocal.sh:
#! /bin/sh
# Terminate on error
set -e
# Switch to the script directory
cd "$(dirname "$0")"
# Launch proxy server
./tinyproxy -c tinyproxy.conf
Now, let's connect to the docker VM, get root, and switch to the boot2docker directory:
docker-machine ssh
sudo -s
cd /var/lib/boot2docker
Next, we'll copy all three files over and set their permissions:
cp /c/Users/username/{tinyproxy{,.conf},bootlocal.sh} .
chmod 755 tinyproxy bootlocal.sh
chmod 644 tinyproxy.conf
Exit the VM session by pressing Ctrl+D twice and restart the VM:
docker-machine restart default
That's it! Now docker should be able to pull and push images from the different URLs, automatically selecting the right proxy server.
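As a quick sanity check (assuming the machine is still named default), you can confirm the proxy process is running and that the daemon sees the proxy settings:
docker-machine ssh default "ps | grep tinyproxy"
docker info | grep -i proxy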

Key error trying to run basic ubuntu image

I'm trying to run a basic image as described in the Docker tutorial:
docker --version
Docker version 1.9.0, build 76d6bc9
docker run -t -i ubuntu:14.04 /bin/bash
but it is reporting:
Error while pulling image: Get https://index.docker.io/v1/repositories/library/ubuntu/images: x509: certificate is valid for FG3K6C3A15800002, not index.docker.io
I'm behind a corporate firewall, so have set http_proxy and https_proxy env variables appropriately. The server itself is Ubuntu 14.04 LTS.
I've read various posts about clock settings etc, but these seem OK.
Has anyone any ideas?
Even though you state having set your proxy variables, make sure to try this full set of proxy variables in your /etc/default/docker:
export "HTTP_PROXY=http://<user>:<password>@<proxy.company.com>:<port>"
export "HTTPS_PROXY=http://<user>:<password>@<proxy.company.com>:<port>"
export "http_proxy=http://<user>:<password>@<proxy.company.com>:<port>"
export "https_proxy=http://<user>:<password>@<proxy.company.com>:<port>"
export "NO_PROXY=.company.com,.sock,localhost,127.0.0.1,::1"
export "no_proxy=.company.com,.sock,localhost,127.0.0.1,::1"
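After editing /etc/default/docker, restart the daemon so the variables take effect; on Ubuntu 14.04 that is:
sudo service docker restart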
If that doesn't solve the issue, upgrade docker to the latest 1.10.1.
Note: docker machine issue 531 mentions docker-machine provision as a workaround.
Update 2021 on issue 531
I had the exact same issue just now, and apparently it was fixed by resetting Docker to factory settings and enabling the Kubernetes cluster again.
EDIT: I managed to reproduce the fix on a second machine. The exact steps in my case were:
start Docker Desktop
update to 3.2.1 -> immediately after this another update, to 3.2.2, was available
update to 3.2.2
enable Kubernetes cluster -> wait until the error appears
right click on Docker in the System Tray -> choose Troubleshoot
click Reset to factory defaults -> wait until the reset is finished
right click on Docker in the System Tray -> choose Quit Docker Desktop
open Docker Desktop again
select only Enable Kubernetes
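Once Kubernetes reports running again, a quick way to confirm the cluster is reachable (these are standard kubectl commands):
kubectl cluster-info
kubectl get nodes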

Error in Docker: bad address to executables

I'm trying to do something with Docker.
Steps I'm doing:
- Launch Docker Quickstart Terminal
- run docker run hello-world
Then I get error like:
bash: /c/Program Files/Docker Toolbox/docker: Bad address
I have to say that I was able to run the hello-world image before, but now I'm not. I don't know what happened.
I don't know if it matters, but I had some problems at the installation step, since I have Git installed in a non-standard location. However, git bash.exe seems to be working correctly for Docker.
My environment:
Windows 10
Git 2.5.0 (installed before Docker)
Docker Toolbox 1.9.1a
I had the same issue with bash: /c/Program Files/Docker Toolbox/docker: Bad address.
I thought the problem was that bash doesn't support docker.exe, so I fixed it by using PowerShell instead of bash.
If you use PowerShell, you may face this:
An error occurred trying to connect: Get http://localhost:2375/v1.21/containers/json: dial tcp 127.0.0.1:2375: ConnectEx tcp: No connection could be made because the target machine actively refused it.
You can export the variables from bash using export, and set them in PowerShell as below:
$env:DOCKER_HOST="tcp://192.168.99.100:2376"
$env:DOCKER_MACHINE_NAME="default"
$env:DOCKER_TLS_VERIFY="1"
$env:DOCKER_TOOLBOX_INSTALL_PATH="C:\\Program Files\\Docker Toolbox"
$env:DOCKER_CERT_PATH="C:\\Users\\kk580\\.docker\\machine\\machines\\default"
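Rather than copying the values by hand, docker-machine can emit them for PowerShell directly (run from PowerShell, assuming the machine is named default):
docker-machine env --shell powershell default | Invoke-Expression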
That's all.
PS: I found this problem was fixed by updating Git from 2.5.0 to 2.6.3.
Not entirely sure what the issue is; report it to the project on GitHub. I find the Docker Mac and Windows tools a bit flaky from time to time, as they are still maturing. If you don't mind seeing what's underneath, you can try running docker-machine directly, or set up your own host pretty quickly with Vagrant.
Docker Machine
Open a command or bash prompt and see what machines you have:
docker-machine ls
Create a machine if you don't have one listed:
docker-machine create -d "virtualbox" default-docker
Then connect to the listed machine (or default-docker):
docker-machine ssh default-docker
Vagrant
If that doesn't work, you can always use Vagrant to manage VMs:
Install VirtualBox (which you probably have already if you installed the Toolbox)
Reinstall Git; make sure you select the option for adding ALL the tools to your system PATH (needed for vagrant ssh)
Install Vagrant
Open a command or bash prompt:
mkdir docker
cd docker
vagrant init debian/jessie64
vagrant up --provider virtualbox
Then, to connect to your docker host, run (from the same docker directory you created above):
vagrant ssh
Now you're on the Docker host. Install the latest Docker the first time:
curl https://get.docker.com/ | sudo sh
Docker
Now that you have either a Vagrant or docker-machine host up, you can docker away:
sudo docker run -ti busybox sh
You could also use PuTTY to connect to Vagrant machines instead of installing git/ssh and running vagrant ssh. It provides a nicer shell experience, but it requires some manual setup of the SSH connections.

jstack and other tools on google cloud dataflow VMs

Is there a way to run jstack on the VMs created for Dataflow jobs?
I'm trying to see where the job spends most of its CPU time, and I can't find jstack installed.
Thanks,
G
A workaround which I found to work:
Log on to the machine
Find the docker container that runs "python -m taskrunne" using sudo docker ps
Connect to the container using sudo docker exec -i -t 9da88780f555 bash (replacing the container id with the one found in step 2)
Install openjdk-7-jdk using apt-get install openjdk-7-jdk
Find the process id of the java executable
Run /usr/bin/jstack 1437 (replacing 1437 with the PID found in step 5); a scripted sketch of the whole sequence follows below.
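Put together, the sequence might look like this (the container filter comes from step 2; your container id and PID will differ):
# on the VM: find the worker container and open a shell in it
CID=$(sudo docker ps | grep taskrunne | awk '{print $1}')
sudo docker exec -it "$CID" bash
# inside the container: install a JDK and dump the java process
apt-get update && apt-get install -y openjdk-7-jdk
/usr/bin/jstack "$(pgrep -f java | head -1)"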
This GitHub issue update includes some basic instructions for getting profiles using the --enableProfilingAgent option.
This doesn't answer the "and other tools" part of your question, but:
Dataflow workers run a local HTTP server that you can use to get some info. Instead of using jstack, you can get a thread dump with this:
curl http://localhost:8081/threadz
I'm not familiar with jstack, but based on a quick Google search it looks like jstack is a tool that runs independently of the JVM and just takes a PID. So you can do the following while your job is running:
ssh into one of the VMs using gcutil ssh
Install jstack on the VM.
Run ps aux | grep java to identify the PID of the java process.
Run jstack using the PID you identified.
Would that work for you? Are you trying to run jstack from within your code so as to profile it automatically?
