Why does sshpass not get the same output as ssh? - ros

I am trying to test a client/server application running on a remote machine. The expected result is that when the client calls the service, the service node will crash and generate a core dump file.
When I ssh onto the remote machine and run the command, it works and a core dump is generated.
When I run the same command with sshpass, however, my service doesn't crash and no core dump file is generated, even though I get the same stdout output.
I am totally puzzled. Does sshpass differ from ssh in more than just password management?
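One difference worth ruling out: an interactive ssh login gives the command a TTY and a login shell, while sshpass usually wraps a one-shot ssh invocation that gets neither, so things like the core dump ulimit and signal handling can differ between the two. A hedged way to compare the environments (user, host and command below are placeholders, not taken from the question):

sshpass -p "$PASS" ssh user@remote 'tty; ulimit -c'
# Forcing a pseudo-terminal and a login shell often reproduces the interactive behaviour:
sshpass -p "$PASS" ssh -t user@remote 'bash -lc "ulimit -c; your_command"'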

Related

Timeout not working for SSHFS connection in script

I want to automate an sftp-based backup in an Alpine-based Docker container.
I've got a /bin/sh script that should check if the connection is established successfully.
Clients connect via ssh keys, so it is passwordless authentication.
timeout -k 1 4 sshfs -p $port -o IdentityFile=/home/ssh/ssh_host_rsa_key,StrictHostKeyChecking=accept-new,_netdev,reconnect $user@$address:/ /mnt/sftp/
This line establishes the sftp connection. It works just fine if the key is correct, and even when the server refuses the connection. The problem arises when the server doesn't accept the provided key: it then asks for a password in the interactive shell, like:
user123@backup.example.xyz's password:
and timeout just does not kill the process; the script doesn't go forward after this since it waits for user input (which is never going to come).
I use this script at startup to check the connection and stop the container immediately if it fails, so the user notices configuration errors as soon as they start the container.
Is there a way to kill this command after a certain time or, as a workaround, to prohibit interactive input for the sshfs command?
Thanks!
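One possible workaround, sketched on the assumption that your setup matches the command above: sshfs passes unrecognised -o options through to ssh, and the standard ssh option BatchMode=yes disables all interactive prompting, so a rejected key makes the mount fail immediately instead of hanging on a password prompt.

timeout -k 1 4 sshfs -p $port -o BatchMode=yes -o IdentityFile=/home/ssh/ssh_host_rsa_key,StrictHostKeyChecking=accept-new,_netdev,reconnect $user@$address:/ /mnt/sftp/

With prompting disabled, the command exits non-zero on authentication failure, which the /bin/sh script can check directly.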

`docker attach` in Google Compute Engine VM not showing output, cannot exit back to shell

I have a container with a Python script that runs at startup, which I'm using to verify basic VM functionality.
import time

while True:
    print('Looping forever')
    time.sleep(3)
I have deployed this to a GCE VM instance with stdin buffer enabled.
The GCE instance is green-checkmarked.
I can connect to the VM using browser window ssh and see the container running.
I can docker attach to the active container.
What's not working:
I don't see any output from the script when I look at the VM logs in the Google Cloud console.
I don't see any output when attached to the active container. I can't use Ctrl+C or Ctrl+Z to exit back to shell.
I can docker run $image inside the ssh session, but I don't see any output and can't exit back to shell (same problem as with docker attach above).
If I close the browser ssh window and open a new browser ssh window, I can now see two containers running, the original one and the one that I launched in the previous ssh session using docker run.
I feel like there is something stupidly trivial that I've forgotten to set up.
===== EDIT =====
I found that even when I docker run locally, I don't see output and can't exit. I have to use kill in another terminal window to kill it.
When I run docker run -it $image in the VM's browser ssh terminal, I also see the output, which is good.
I think there's some behavior of docker attach that is working as intended, just not intuitive. I'd still like to achieve one of these goals:
Be able to see the output from the running container in the VM ssh session.
Be able to see the output from the running container in cloud logs.
Answering my own question for posterity: I needed to set up Cloud Logging first:
https://cloud.google.com/logging/docs/setup/python
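For the first goal, seeing the output in the ssh session, a possible sketch (image name as in the question; PYTHONUNBUFFERED is a standard Python environment variable that disables stdout buffering, which otherwise tends to hide print() output when no TTY is attached):

docker run -d -e PYTHONUNBUFFERED=1 $image
# Follow the container's output without attaching, so Ctrl+C only stops the log stream:
docker logs -f $(docker ps -lq)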

testcafe failing to connect to server when using --proxy option

I'm trying to run testcafe in our pipeline (Semaphore) using a Docker image based on the official one, with the only additions being copying our tests into it and installing some additional npm packages they use. Those tests run against a test environment which, for security reasons, can only be accessed either via VPN or a proxy. I'm using the --proxy flag, but the test run fails with the message:
ERROR Unable to establish one or more of the specified browser connections
1 of 1 browser connections have not been established:
- chromium:headless
Hints:
- Use the "browserInitTimeout" option to allow more time for the browser to start. The timeout is set to 2 minutes for local browsers and 6 minutes for remote browsers.
- The error can also be caused by network issues or remote device failure. Make sure that the connection is stable and the remote device can be reached.
I'm trying to find out what the problem is, but testcafe doesn't have a verbose mode and the --dev flag doesn't seem to log anything anywhere, so I don't have any clue why it's not connecting. My test command is:
docker run --rm -v ~/results:/app/results --env HTTP_PROXY=$HTTP_PROXY $ECR_REPO:$TESTCAFE_IMAGE chromium:headless --proxy $HTTP_PROXY
If I try to run the tests without the proxy flag, they reach the test environment, but I can't actually run the tests because the page shown is not our app but a maintenance page served by default for connections from outside the VPN or not coming through the proxy.
If I go inside the testcafe container and run:
curl https://mytestserver.dev --proxy $HTTP_PROXY
it connects without any problem.
I've also tried to use firefox:headless instead of Chromium, but I've found that it actually ignores the --proxy option altogether (I reckon it's a bug).
We have a Cypress container in that same pipeline going through that same proxy, and it connects and runs the tests flawlessly.
Any insight about what the problem could be would be much appreciated.
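One thing that may be worth checking: with --proxy set, the browser can end up routing its connection back to the local TestCafe server through the proxy as well, which produces exactly this "browser connections have not been established" failure. TestCafe has a --proxy-bypass option for excluding addresses from proxying; a hedged variant of the command above (the bypass list is an assumption, not something from the question):

docker run --rm -v ~/results:/app/results --env HTTP_PROXY=$HTTP_PROXY $ECR_REPO:$TESTCAFE_IMAGE chromium:headless --proxy $HTTP_PROXY --proxy-bypass localhost,127.0.0.1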

Can ansible ping and SSH into machine but playbook fails due to "Host key verification failed" error

I have a Jenkins project which pulls and containerises changes from the given repo and then uses an Ansible playbook to deploy to the host machine/s. There are over 10 different server groups in my /etc/ansible/hosts file, all of which can be pinged successfully using ansible -m ping all and SSH'd into from the Jenkins machine.
I spun up a new VM, added it to the hosts file and used ssh-copy-id to add the Jenkins machine's public key. I received a pong from my ansible ping and successfully SSH'd into the machine. When I run the project, I receive the following error:
TASK [Gathering Facts] *********************************************************
fatal: [my_machine]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Host key verification failed.", "unreachable": true}
The Jenkins project is virtually identical to my other projects and the VM is set up the same as my other ones.
In the end I had to add host_key_checking = False to my /etc/ansible/ansible.cfg file, but that is just a temporary fix.
Other answers online seem to suggest that the issue is with the SSH key, but I don't believe this is true in my case as I can SSH into the machine. I would like to understand how to get rid of this error message and deploy without disabling host key checking.
The remote host is in ~/.ssh/known_hosts.
Any help would be appreciated.
SSH to a remote host will verify that host's key. If you ssh to a new machine, you will be asked whether to add / trust the key; if you choose "Yes", the key is saved in ~/.ssh/known_hosts.
The message "Host key verification failed" implies that the remote host's key is missing from, or has changed in, the known_hosts file on the machine that runs the Ansible script.
I normally resolve this problem by issuing an ssh to the remote host and adding the key to the ~/.ssh/known_hosts file.
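A hedged sketch of that fix for this case, assuming the playbook runs as the jenkins user and that its home directory is /var/lib/jenkins (both are assumptions; my_machine is the host from the error above):

# Append the new host's key to the known_hosts of the user that actually runs Ansible
ssh-keyscan -H my_machine | sudo -u jenkins tee -a /var/lib/jenkins/.ssh/known_hosts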
For me it helped to disable the host SSH key check in the Jenkins Job Configuration

Error starting Windows docker container with managed service account

I'm trying to test out Docker containers running with a domain credential, and I'm following these instructions from Microsoft Docs. I have created the group Managed Service Account (gMSA), which I'm pretty sure I've done correctly, as I can run other services on my local computer using it.
I'm testing on a Windows 10 PC, running hyper-v docker containers.
I have built an image called sqltest. When I run the following, the container does everything as expected:
docker run -it sqltest
I tried creating the Active Directory credential spec using this command:
New-CredentialSpec -Name developerpcsql -AccountName developerpcsql
Calling Get-CredentialSpec confirms that the json file is created as expected, and it looks right when I open the file.
To run the container, I'm using:
docker run -it --security-opt "credentialspec=file://developerpcsql.json" sqltest
When I do that, it takes about 30 seconds and then I get the following error:
Error response from daemon: container d97082fab98c0205c0072b0a8b79fb7835c8e90828498428b976e378762cc412 encountered an error during Start: failure in a Windows system call: The operation timed out because a response was not received from the Virtual Machine hosting the Container. (0xc0370109).
To confirm it's not my container, I've also tried using the standard microsoft/servercore container and got the same error.
Any ideas on what I'm missing?
It looks like it does not work for Windows 10.
You can find a discussion of the topic in the Virtualization-Documentation Git repo.
It does work as expected for containers hosted on Windows Server 2016.
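A hedged sketch of the same run on a Windows Server 2016 host, where process isolation can be used instead of Hyper-V isolation (--isolation is a standard docker run option; the image and credential spec names are from the question):

docker run -it --isolation=process --security-opt "credentialspec=file://developerpcsql.json" sqltest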
