$ docker login -u uploader -p ****** http://10.11.20.186:8082 [14:13:41]
Error: credentials key has https[s]:// prefix
$ docker -v [14:28:20]
podman version 3.4.1-dev
$ cat /etc/os-release [14:28:59]
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Stream 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
sheng@B-Product-U-WEB01 /etc/containers/registries.conf.d
$ cat 003-nexus.conf [14:45:28]
[[registry]]
prefix = "10.11.20.186:8082"
location = "10.11.20.186:8082"
insecure = true
I want to log in to my Nexus Repository Manager.
However, when I attempt to do this I get this error:
Error: credentials key has https[s]:// prefix
docker login -u uploader -p 1qaz2wsx 10.11.20.186:8082
This should work without a downgrade. Just remove 'https://'.
dnf downgrade podman-docker -y
sheng@B-Product-U-WEB01 /etc/containers/registries.conf.d
$ docker -v [17:27:36]
podman version 3.3.0-dev
sheng@B-Product-U-WEB01 /etc/containers/registries.conf.d
$ docker login -u uploader -p ****** https://10.11.20.186:8082 [17:27:38]
Login Succeeded!
It works.
My Jenkins requires https://; without it, it fails with:
java.net.MalformedURLException: no protocol: quay.io
at java.net.URL.<init>(URL.java:611)
at java.net.URL.<init>(URL.java:508)
at java.net.URL.<init>(URL.java:457)
at org.jenkinsci.plugins.docker.commons.credentials.DockerRegistryEndpoint.getEffectiveUrl(DockerRegistryEndpoint.java:145)
at org.jenkinsci.plugins.docker.commons.credentials.DockerRegistryEndpoint.newKeyMaterialFactory(DockerRegistryEndpoint.java:300)
at org.jenkinsci.plugins.docker.workflow.RegistryEndpointStep$Execution2.newKeyMaterialFactory(RegistryEndpointStep.java:95)
at org.jenkinsci.plugins.docker.workflow.AbstractEndpointStepExecution2.doStart(AbstractEndpointStepExecution2.java:52)
at org.jenkinsci.plugins.workflow.steps.GeneralNonBlockingStepExecution.lambda$run$0(GeneralNonBlockingStepExecution.java:77)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
The solution turned out to be a system upgrade to the latest version of the RHEL 8.6 packages through dnf upgrade. Now I have
# docker --version
podman version 4.0.2
which allows me to use the https:// prefix without complaints.
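For reference, the sequence was roughly this (a sketch; the exact podman version you end up with depends on the repos enabled on your system):
$ dnf upgrade -y
$ docker --version
podman version 4.0.2
$ docker login -u uploader -p ****** https://10.11.20.186:8082
Login Succeeded!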
Related
I have a CapRover instance on a DigitalOcean droplet that I created. I want to use the CapRover instance to run the CapRover sample apps.
I opened the DigitalOcean droplet web console in order to run a CapRover instance.
I ran the following commands:
ufw allow 80,443,3000,996,7946,4789,2377/tcp; ufw allow 7946,4789,2377/udp;
and got this:
Skipping adding existing rule
Skipping adding existing rule (v6)
Skipping adding existing rule
Skipping adding existing rule (v6)
I then ran this:
docker run -p 80:80 -p 443:443 -p 3000:3000 -v /var/run/docker.sock:/var/run/docker.sock -v /captain:/captain caprover/caprover
I got this:
Unable to find image 'caprover/caprover:latest' locally
latest: Pulling from caprover/caprover
Digest: sha256:39c3f188a8f425775cfbcdc4125706cdf614cd38415244ccf967cd1a4e692b4f
Status: Downloaded newer image for caprover/caprover:latest
docker: Error response from daemon: driver failed programming external connectivity on endpoint priceless_sammet (9da9028cfc4873818f113458237ebd00f9c64fa648b853730a60b10bea39c720): Bind for 0.0.0.0:3000 failed: port is already allocated.
I tried changing the ports to:
docker run -p 81:81 -p 444:444 -p 3321:3321 -v /var/run/docker.sock:/var/run/docker.sock -v /captain:/captain caprover/caprover
and got this:
Captain Starting ...
Installing Captain Service ...
Installation of CapRover is starting...
For troubleshooting, please see: https://caprover.com/docs/troubleshooting.html
>>> Checking System Compatibility <<<
Docker Version passed.
Ubuntu detected.
X86 CPU detected.
Total RAM 1033 MB
Are your trying to run CapRover on a local machine or a machine without public IP?
In that case, you need to add this to your installation command:
-e MAIN_NODE_IP_ADDRESS='127.0.0.1'
Otherwise, if you are running CapRover on a VPS with public IP:
Your firewall may have been blocking an in-use port: 80
A simple solution on Ubuntu systems is to run "ufw disable" (security risk)
Or [recommended] just allowing necessary ports:
ufw allow 80,443,3000,996,7946,4789,2377/tcp; ufw allow 7946,4789,2377/udp;
See docs for more details on how to fix firewall issues
Finally, if you are an advanced user, and you want to bypass this check (NOT RECOMMENDED),
you can append the docker command with an addition flag: -e BY_PASS_PROXY_CHECK='TRUE'
Installation failed.
Error: Port seems to be closed: 80
at Request._callback (/usr/src/app/built/utils/CaptainInstaller.js:149:24)
at Request.self.callback (/usr/src/app/node_modules/request/request.js:185:22)
at Request.emit (events.js:400:28)
at Request.<anonymous> (/usr/src/app/node_modules/request/request.js:1154:10)
at Request.emit (events.js:400:28)
at IncomingMessage.<anonymous> (/usr/src/app/node_modules/request/request.js:1076:12)
at Object.onceWrapper (events.js:519:28)
at IncomingMessage.emit (events.js:412:35)
at endReadableNT (internal/streams/readable.js:1334:12)
at processTicksAndRejections (internal/process/task_queues.js:82:21)
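One thing I have not yet checked is what is already holding those ports; here is a sketch of the checks (assuming Ubuntu with ufw and the stock Docker packages):
sudo ufw status verbose              # confirm 80/443/3000 are actually allowed through the firewall
sudo ss -tlnp | grep ':3000'         # see which process already holds port 3000
docker ps --filter "publish=3000"    # an earlier CapRover container may still be bound to it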
How can I open ports 80, 443, and 3000 so that I can run the CapRover instance?
I tried to connect to a remote server with docker context.
When I tried docker ps from the client (local Mac), I got this error message:
error during connect: Get "http://docker.example.com/v1.24/containers/json": command [ssh -l wwww -- tane-dev-0.ccn docker system dial-stdio] has exited with exit status 255, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=wwww@xxx-dev-0.xxx: Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
After googling this problem, there are a few things to check:
Docker version later than 18.09 ✅
client: 20.10.14
remote: 20.10.17
register ssh key with ssh-agent ✅
ssh-add ~/.ssh/id_rsa
Identity added: /Users/ma_kyeongwook/.ssh/id_rsa (ma_kyeongwook@xxx.xxx)
ssh-add -l
4096 SHA256:~~~ ma_kyeongwook@xxx.xxx (RSA)
ssh connect test ✅
I also checked that ~/.ssh/authorized_keys on the remote host contains the client's pub key.
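One more check worth trying is to run the exact command docker uses under the hood (a sketch, reusing the ssh invocation from the error message above; ssh -v shows which key is offered and why it is rejected):
ssh -v -l wwww tane-dev-0.ccn docker system dial-stdio
docker context inspect    # confirm the active context's ssh://user@host endpoint matches the command above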
Did I miss something?
I'm trying to "su" in a container deployed in IBM Containers/Bluemix.
But it fails like this:
root@ubuntu142:/tmp# cf ic exec -it mysshd bash
[root@instance-001652d1 /]# adduser ubuntu
[root@instance-001652d1 /]# su - ubuntu
su: cannot create child process: Resource temporarily unavailable
This works fine in my local docker environment.
I also tried to "su" in the startup script (the user is already defined), but it also failed with the same message (from the log).
(Actually, I'm trying to deploy DB2 Express-C using "su db2inst1"...)
Is there any restriction that prohibits "su" in IBM Containers?
Thanks in advance.
I've seen this problem in CentOS 7 container instances only (it works fine for Ubuntu).
Here is how to fix it:
$ cf ic exec -it ads-centos bash
[root@instance-00173f1f /]# adduser ubuntu
[root@instance-00173f1f /]# su - ubuntu
su: cannot create child process: Resource temporarily unavailable
[root@instance-00173f1f /]# cd etc
[root@instance-00173f1f etc]# cd pam.d
[root@instance-00173f1f pam.d]# vi su
** change all session variables to 'optional' and save changes **
** see how file should be below **
[root@instance-00173f1f pam.d]# su - ubuntu
[ubuntu@instance-00173f1f ~]$ id
uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu)
[ubuntu@instance-00173f1f ~]$
Here is how the su file should look:
#%PAM-1.0
auth sufficient pam_rootok.so
# Uncomment the following line to implicitly trust users in the "wheel" group.
#auth sufficient pam_wheel.so trust use_uid
# Uncomment the following line to require a user to be in the "wheel" group.
#auth required pam_wheel.so use_uid
auth substack system-auth
auth include postlogin
account sufficient pam_succeed_if.so uid = 0 use_uid quiet
account include system-auth
password include system-auth
session optional system-auth
session optional postlogin
session optional pam_xauth.so
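The same edit can also be made non-interactively (a sketch; it changes the session lines to 'optional' exactly as the file above shows, so keep a backup of /etc/pam.d/su):
cp /etc/pam.d/su /etc/pam.d/su.bak
sed -i 's/^session[[:space:]]\+include/session    optional/' /etc/pam.d/su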
I get this error when I try to connect to a node with kubernetes master running as a container: "PTY allocation request failed on channel 0"
Steps to reproduce:
1. I run a Mac with OS X El Capitan 10.11.1.
2. Download the standard CentOS 7.1 image from osboxes.
3. Start it in VirtualBox 5.0.10: 1 NATted interface, 1 port forward from host:2200->guest:22.
4. Install Docker 1.9.
5. ssh into the CentOS box.
6. Run the following (as per the Kubernetes user manual):
6.a docker run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
6.b docker run --volume=/:/rootfs:ro --volume=/sys:/sys:ro --volume=/dev:/dev --volume=/var/lib/docker/:/var/lib/docker:ro --volume=/var/lib/kubelet/:/var/lib/kubelet:rw --volume=/var/run:/var/run:rw --net=host --pid=host --privileged=true -d gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube kubelet --containerized --hostname-override="127.0.0.1" --address="0.0.0.0" --api-servers=http://localhost:8080 --config=/etc/kubernetes/manifests
6.c docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
7. ssh into the CentOS box again; you'll get the following error: "PTY allocation request failed on channel 0"
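A quick way to confirm that it is only the PTY allocation failing (a sketch; 2200 is the forwarded port from step 3, and the 'osboxes' username is a hypothetical placeholder for whatever account exists in the image):
ssh -p 2200 -T osboxes@localhost     # -T skips the PTY request, so the login itself should still work
ssh -p 2200 -tt osboxes@localhost    # -tt forces a PTY request and should reproduce the error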
I'm opening this issue against Kubernetes because otherwise the above configuration seems to be working fine; only when I start Kubernetes does the issue show up.
Thanks
Raffaele
I first tried installing VirtualBox by downloading "VirtualBox 5.0 for OS X hosts (amd64)" from the VirtualBox download page, and then installed boot2docker and docker via brew.
The first apparent issue appeared when creating the boot2docker-vm image:
$ boot2docker init
2015/07/27 21:38:13 Creating VM boot2docker-vm...
2015/07/27 21:38:13 Apply interim patch to VM boot2docker-vm (https://www.virtualbox.org/ticket/12748)
2015/07/27 21:38:13 Failed to modify VM "boot2docker-vm": exit status 1
Launching the VirtualBox Manager application, I can see the boot2docker-vm machine "Running", but looking at the log I see something like this, which appears to indicate that the boot2docker-vm "machine" failed to boot:
00:00:04.169546 Guest Log: BIOS: Boot : bseqnr=1, bootseq=4231
00:00:04.169711 Guest Log: BIOS: Boot from Floppy 0 failed
00:00:04.170101 Guest Log: BIOS: Boot : bseqnr=2, bootseq=0423
00:00:04.170490 Guest Log: BIOS: CDROM boot failure code : 0002
00:00:04.170800 Guest Log: BIOS: Boot from CD-ROM failed
00:00:04.171190 Guest Log: BIOS: Boot : bseqnr=3, bootseq=0042
00:00:04.171795 Guest Log: int13_harddisk: function 02, unmapped device for ELDL=80
00:00:04.172304 Guest Log: BIOS: Boot from Hard Disk 0 failed
00:00:04.172706 Guest Log: BIOS: Boot : bseqnr=4, bootseq=0004
00:00:04.172991 Guest Log: BIOS: Booting from LAN...
00:00:04.191271 Display::handleDisplayResize(): uScreenId = 0, pvVRAM=0000000000000000 w=720 h=400 bpp=0 cbLine=0x0, flags=0x1
00:00:06.446949 Guest Log: BIOS: Boot from LAN failed
00:00:06.448852 Guest Log: Could not read from the boot medium! System halted.
I uninstalled everything and then tried downloading and installing from the boot2docker download page, which installs VirtualBox, boot2docker, and docker all in one go. But I still see the same problem indicated above (the boot2docker-vm machine fails to boot).
I'm reluctant to make big changes to the OS X version on my laptop, since my hardware is old. But I'll try the installation sequence on a more modern machine and see if it works there.
Has anyone managed to make docker work on OS X Version 10.9.5?
EDIT (adding additional information which comments suggest might be relevant):
My machine has:
2.26GHz Intel Core 2 Duo
4 GB of RAM (1067 MHz DDR3)
NVIDIA GeForce 9400M 256 MB
OS X 10.9.5
I installed everything as the primary User (not root) on my system.
And the versions of everything which I installed are:
VirtualBox 4.3.30 r101610
boot2docker version 1.7.1
docker version 1.7.1
This issue on GitHub might be of help (the latest version of VirtualBox 4.3.x works fine in the issue described). Though I would suggest using docker-machine. Below are the steps (installation):
$ docker-machine create --driver virtualbox dev
$ eval "$(docker-machine env dev)"
And then you can use docker commands as usual.
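For example, a quick end-to-end check (a minimal sketch; assumes the eval "$(docker-machine env dev)" above has been run in the current shell):
$ docker run --rm hello-world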
Some of the comments in the GitHub issue suggested by nash_ag and this Stack Overflow question pointed me in the right direction.
This is the sequence of steps I used to get VirtualBox, boot2docker, docker, and docker-machine working in my environment (where $USERNAME is my primary OS X User, who installed VirtualBox), with several wrong turns elided, and most output omitted:
$ rm -rf /Users/$USERNAME/VirtualBox\ VMs/
$ boot2docker delete
(ran VirtualBox Uninstall script from my desktop)
...
$ brew tap caskroom/cask
...
$ brew update
...
$ brew install brew-cask
...
$ brew cask install virtualbox
...
$ VBoxManage -v
5.0.0r101573
$ boot2docker -v
Boot2Docker-cli version: v1.7.1
Git commit: 8fdc6f5
$ VBoxManage list vms
(nothing)
$ boot2docker init -v
...
$ boot2docker up
...
$ eval "$(boot2docker shellinit)"
(writes .pem files)
$ brew install docker-machine
...
$ docker-machine -v
docker-machine version 0.3.1 (HEAD)
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
$ docker-machine create --driver virtualbox dev
...
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
dev virtualbox Running tcp://192.168.99.100:2376
$ VBoxManage list vms
"boot2docker-vm" {99d5c5c1-e7cc-49bf-93c7-b0cbf626d62c}
"dev" {341fd11e-86f9-46ca-89e6-39ee78458a4b}
$ eval "$(docker-machine env dev)"
$ docker run -d -p 8000:80 nginx
...
$ curl $(docker-machine ip dev):8000
<!DOCTYPE html>
...
At this point things appear to be working well enough for me to use the "standard" docs/instructions for docker and docker-machine, so my original problem is solved.