I am using endly for end-to-end testing and my application is running in a docker container, and I am facing the error below. I already have my secret keys/folder in place.
[run[build]run|build[Init]docker.run ssh: no key found at exec.extract error]
[run[build]run|build[Init]docker.run build_Init: ssh: no key found at docker.run/exec.extract at workflow.run error]
[run[build]run|build[Init]docker.run build: build_Init: ssh: no key found at docker.run/exec.extract at workflow error]
build: build_Init: ssh: no key found at docker.run/exec.extract at workflow.run at workflow.run
[run[build]run|build[Init]docker.run run_init: build: build_Init: ssh: no key found at docker.run/exec.extract a error]
I followed the steps mentioned at the github.com/viant/endly/tree/master/doc/secrets link. The contents of my localhost.json are {"Username":"xxx","EncryptedPassword":"xxx","PrivateKeyPath":"xxx/.ssh/id_rsa.pub"}
You would get this error message in endly if there is an error during ssh, i.e. if endly is unable to use the secrets/credentials provided to ssh.
Could you provide additional details on the credentials file:
1. How you went about creating the file.
2. Contents of the file without your password.
I am assuming you are configuring ssh to localhost when building and deploying your app, so the secret file would be localhost.json.
Please refer to endly secrets for creating credentials.
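As a hedged illustration only (the username and key path below are placeholders, not taken from your setup), a hand-written localhost.json in ~/.secret/ would usually have this shape; note that PrivateKeyPath normally points at the private key (id_rsa), not the .pub file:
cat > ~/.secret/localhost.json <<'EOF'
{
  "Username": "myuser",
  "EncryptedPassword": "***",
  "PrivateKeyPath": "/home/myuser/.ssh/id_rsa"
}
EOF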
Related
Interrupting the pulumi up command caused the below error:
error: could not get cloud url: failed to read Pulumi credentials file. Please re-run `pulumi login` to reset your credentials file: unexpected end of JSON input
When trying to run pulumi login it gives the below error:
error: could not determine current cloud: failed to read Pulumi credentials file. Please re-run `pulumi login` to reset your credentials file: unexpected end of JSON input
Any idea how to fix this?
This is due to a corrupted credentials file. Run which pulumi, locate the credentials.json file, and delete it. Then run pulumi login again.
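As a hedged sketch (the exact path can vary, but on most installs the Pulumi CLI keeps its state under ~/.pulumi, so the file to remove is ~/.pulumi/credentials.json):
rm ~/.pulumi/credentials.json
pulumi login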
I was trying to build the docker image for a project I'm working on.
It's based on JHipster; after configuring the project it tells me to run the following Maven command:
./mvnw -ntp -Pprod verify jib:dockerBuild
Unfortunately it doesn't seem to work; it returns these errors:
[WARNING] The credential helper (docker-credential-pass) has nothing for server URL: registry.hub.docker.com
...
[WARNING] The credential helper (docker-credential-pass) has nothing for server URL: index.docker.io
[WARNING]
And finally fails with:
[ERROR] Failed to execute goal com.google.cloud.tools:jib-maven-plugin:2.4.0:dockerBuild (default-cli) on project booking: (null exception message): NullPointerException -> [Help 1]
Recently I worked on a Google Cloud project, and I edited the ~/.docker/config.json configuration file. I had to remove Google's configuration entries to sort out another problem. Could that be the origin of the problem I'm facing now?
I've tried to do docker logout and docker login without success.
Some considerations
I don't know if manually editing the configuration caused the error; in fact I'm pretty sure I deleted only Google-related entries and nothing referring to docker.* or similar.
To solve this issue, avoid editing the docker configuration file manually. In fact I think that should be avoided whenever possible, to prevent configuration problems of any sort.
Instead, just follow what the error message is trying to tell you: docker is not able to access those URLs. Excluding network problems (which you can troubleshoot with ping registry-1.docker.io, for example), it should be an authentication problem.
How to fix
I've found out that running those commands fixed it:
docker login registry.hub.docker.com
docker login registry-1.docker.io
I don't know if registry-1.docker.io is just a mirror of the first server, which the plugin tries to access after the first unsuccessful connection. You can try to log in to registry.hub.docker.com and re-launch the command to see if that is sufficient. In case it's not, log in to the second one as well and then it will work.
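As a hedged check (assuming the default config location), the logins should then show up under "auths" in ~/.docker/config.json, or be delegated to the credsStore/credHelpers configured there:
cat ~/.docker/config.json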
I ran jib via Gradle:
./gradlew jibDockerBuild
and got a similar error
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':jibDockerBuild'.
> com.google.cloud.tools.jib.plugins.common.BuildStepsExecutionException: Build to Docker daemon failed, perhaps you should make sure your credentials for 'registry-1.docker.io/library/openjdk' are set up correctly. See https://github.com/GoogleContainerTools/jib/blob/master/docs/faq.md#what-should-i-do-when-the-registry-responds-with-unauthorized for help
What ended up solving this error for me, bizarrely enough, was to log out of Docker Desktop.
I later also tried funder7's solution while logged in to Docker Desktop, and that also worked.
I just installed Docker 19.03.2 on my Windows 10 laptop. When I attempt any docker command such as docker login or docker version, I get the warning below.
WARNING: Error loading config file: C:\Users\user-id\.docker\config.json: json: cannot unmarshal string into Go value of type configfile.ConfigFile
My config.json looks as below
"quay.io": {"<hidden for security reasons>"}
I could not find any reference to this docker login error in a Google search. There are references for type string, but not for configfile.
Any help is appreciated. Thanks in advance.
Clear the file: delete everything in C:\Users\user-id\.docker\config.json
From the command line run:
docker login -u your_docker_username "https://index.docker.io/v1/"
Type your password
Output:
Login Succeeded
It will generate "auths" credentials in the C:\Users\user-id\.docker\config.json file.
You can use docker login again with no problems after that.
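As a hedged check (Docker Desktop may also delegate credentials to a credsStore entry), the regenerated file should now contain a proper JSON object with an "auths" section for the registry you logged in to:
type "C:\Users\user-id\.docker\config.json"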
I am trying to transfer all .sh files from one unix server to another using Jenkins.
The files are getting transferred, but they end up in my unix home directory; I need to transfer them to the sudo user's directory.
For example:
The source server name is "a" and the target server name is "u".
We are using sell4 as the sudo user on the target server.
The files should end up in the home directory of the sell4 user.
I have used the below command; this is the Jenkins console output:
Building in workspace /var/lib/jenkins/workspace/EDB-ExtractFilefromSVN
SSH: Connecting from host [a]
SSH: Connecting with configuration [u] ...
SSH: EXEC: STDOUT/STDERR from command [sudo scp *.sh sell4#u:/usr/app/TomcatDomain/ScoringTools_ACCDomain04/] ...
sudo: scp: command not found
SSH: EXEC: completed after 201 ms
SSH: Disconnecting configuration [u] ...
ERROR: Exception when publishing, exception message [Exec exit status not zero. Status [1]]
Gitcolony notification failed - java.lang.IllegalArgumentException: Invalid url:
Finished: UNSTABLE
Can you please suggest what I am doing wrong here?
EDITS:
Adding the shell screenshot:
Ah, so it's some kind of plugin. It seems like you want to run local sudo to log in as a user on the remote server. It won't work this way. You can't open the door to a bathroom and expect to walk into a garden.
sudo changes your local user to root; it does nothing on the remote server.
Do not use sudo with the scp command, but rather follow these answers:
https://unix.stackexchange.com/questions/66021/changing-user-while-scp
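A hedged sketch of one such approach (the "youruser" login and the use of sell4 via sudo are assumptions based on your example): copy the scripts to a staging directory you can write to, then use sudo on the remote side to move them into place.
scp *.sh youruser@u:/tmp/
ssh -t youruser@u "sudo -u sell4 cp /tmp/*.sh /usr/app/TomcatDomain/ScoringTools_ACCDomain04/"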
We are just starting with Jenkins for CI - so we're definitely in the newbie phase here. Here's what we are trying to do:
We are using the Publish Over SSH Jenkins plugin to transfer the build artifacts to a target server and into a specific root folder - let's call it /var/mycompany/myapp - and this is where the problem is happening.
We have configured the Publish Over SSH plugin to use a key in order to make the connection to the target server. In Manage Jenkins > Configure System > SSH Servers section we have also configured our target ssh server (name/hostname/username/remote folder etc). The connection is successful when tested.
The build job has been configured to "Send artifacts over SSH" as a post-build action. Now this works fine as long as I send the files to my user's home folder location on the target server (i.e. it will create the necessary ~/var/mycompany/myapp folder structure and transfer all the files). BUT - if I change the "Publish over SSH" config to use a Remote Directory of "/", the job fails with the following error:
SSH: Connecting from host [myjenkins]
SSH: Connecting with configuration [stage-tester] ...
SSH: Creating session: username [myusername], hostname [x.x.x.x], port [22]
SSH: Connecting session ...
SSH: Connected
SSH: Opening SFTP channel ...
SSH: SFTP channel open
SSH: Connecting SFTP channel ...
SSH: Connected
SSH: cd [/]
SSH: OK
SSH: cd [var]
SSH: OK
SSH: cd [mycompany]
SSH: OK
SSH: mkdir [myapp]
SSH: FAILED: Message [Permission denied]
At first this made sense to us, as the key was created by a specific user who was not a sudoer on the target Linux/Fedora server. So we made the user a member of a sudoer group, expecting that this would solve the problem. It hasn't fixed the issue - we continue to get the "permission denied" error. So the question is: how do we go about gaining access to the server root for our user/key?
Any advice is appreciated.
Thanks!
In the end this is the solution I used:
Using instructions found here I set up a "jenkins-chef" ssh keypair in the /var/lib/jenkins/.ssh folder and transferred the public key to the jenkins user's authorized_keys file on my target deployment server (a command sketch follows this list).
chown the keypair in /var/lib/jenkins/.ssh (from step 1) to the "jenkins" user so that the Jenkins ssh server configuration in step 3 can read the key
Set up an SSH server reference in Jenkins: under Manage Jenkins > Configure System > SSH Servers I added a new ssh server ref pointing at my target deployment server, using the 'jenkins' username and the path to the new key (/var/lib/jenkins/.ssh/jenkins-chef). Also, we wanted to publish to a folder off of the target server's root (/) folder, and it turned out to be important that we specify the "remote directory" as '/' (root).
In my Jenkins job I added a new build step using the 'send files or execute commands over ssh' plugin. I configured it to use the ssh server I defined in step 3:
source files: **/*
exec command: sudo chef-client
remote directory: var/my-apps/my-app
Note: If I attempted to specify the remote directory with an initial root slash (/var/my-apps/my-app), it would copy them to the Jenkins user's home folder and simply create the specified folder structure there. That's why specifying '/' in step 3 was important.
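For reference, here is a hedged sketch of steps 1 and 2 (the key name, paths and the "target-server" hostname follow this answer but are assumptions, not a prescription; run with sufficient permissions to write into /var/lib/jenkins/.ssh):
ssh-keygen -t rsa -f /var/lib/jenkins/.ssh/jenkins-chef -N ''
sudo chown jenkins:jenkins /var/lib/jenkins/.ssh/jenkins-chef /var/lib/jenkins/.ssh/jenkins-chef.pub
ssh-copy-id -i /var/lib/jenkins/.ssh/jenkins-chef.pub jenkins@target-server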
That's it from the Jenkins side. However the first time I tried to run this I got this error:
SSH: EXEC: STDOUT/STDERR from command [sudo chef-client] ...
sudo: no tty present and no askpass program specified
This was because I was attempting to run 'sudo', which issues a password challenge on the target server. To avoid this I made the following change to the sudoers file on the target server using the 'visudo' command: at the bottom of the file I added this line to give the jenkins user the ability to run sudo without being prompted for a password:
jenkins ALL=(ALL) NOPASSWD:ALL
Once that was done it all worked as expected.
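As a hedged check (the hostname is an assumption), a non-interactive sudo run as the jenkins user should now succeed without prompting:
ssh jenkins@target-server 'sudo -n chef-client --version'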
As you are working with Linux machines, you can try to use the SCP Plugin.
In the global configuration, you just have to define the target server like this:
You can use your jenkins public ssh key to manage the authentication, or a user/pwd.
Next, in the Jenkins job, you can create a post-build task to copy the relevant artifacts to your server: