How to copy docker images to an IBM zCX server from USS

I am trying to build a z/OSMF workflow to copy my application docker images from USS into a zCX server. There are two approaches:
- Push the docker image to a zCX private/public docker registry.
- Copy the docker tar to zCX and load it into the docker repository using the docker load command.
I have neither a zCX docker registry nor a Linux system. My docker images are available in tar format on USS. There is no SFTP available in zCX.
1. Is there any way to get my docker images into zCX?
2. Is there any z/OSMF built-in step to perform this activity?
I have read the documentation (http://www.redbooks.ibm.com/redbooks/pdfs/sg248457.pdf), but it is not clear how to get started.

If you have set up zCX and configured a default userid, then presumably you can ssh from the USS environment in z/OS to the zCX environment with no password required.
That means you can run a command like the following to binary-transfer a file to zCX:
ZCX # SC:/tmp>echo 'put /tmp/hw.txt' | sftp -P 8022 admin@cn02.ibm.com:/tmp
The authenticity of host '[cn02.ibm.com]:8022 ([129.01.01.01]:8022)' can't be established.
ECDSA key fingerprint is SHA256:CplCw7We+4b/gOK8ngj9XlY3p6uFNZXRabmvBqu6r3o.
Are you sure you want to continue connecting (yes/no)? yes
FOTS2274 Warning: Permanently added '[cn02.ibm.com]:8022,[129.01.01.01]:8022' (ECDSA) to the list of known hosts.
Connected to cn02.ibm.com.
Changing to: /tmp
sftp> put /tmp/hw.txt
Uploading /tmp/hw.txt to /tmp/hw.txt
/tmp/hw.txt 100% 20 0.0KB/s 00:00
In zCX, after logging on:
admin@cn02:/tmp$ pwd
/tmp
admin@cn02:/tmp$ ls -lrt
total 4
-rwxrwxr-x 1 admin admin 20 May 15 12:37 hw.txt
admin@cn02:/tmp$ cat hw.txt
ȅ▒▒▒#▒▒▒▒#▒▒▒#z`]admin@cn02:/tmp$
As SFTP does binary transfer by default, the contents of hw.txt, which are EBCDIC in USS, have been transferred as-is to zCX.
Because zCX is an ASCII environment, cat on hw.txt in zCX gives unreadable output: the ASCII environment is trying to display data that is EBCDIC.
Since you want to transfer a tar file from USS to zCX, you do not want EBCDIC-to-ASCII conversion to happen; you want sftp to transfer as-is (binary), which is the default.
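Putting it together, a minimal sketch of the whole flow might look like this (the USS path and image tar name are hypothetical; the port matches the 8022 used above):

# On USS: binary-transfer the image tarball to zCX via sftp
echo 'put /u/myuser/myapp.tar' | sftp -P 8022 admin@cn02.ibm.com:/tmp

# Then load it into the zCX docker repository over ssh
ssh -p 8022 admin@cn02.ibm.com docker load -i /tmp/myapp.tar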

There is sftp in zCX.
After I logged into zCX and ran the sftp command, I got:
admin@cn02:~$ sftp
usage: sftp [-46aCfpqrv] [-B buffer_size] [-b batchfile] [-c cipher]
[-D sftp_server_path] [-F ssh_config] [-i identity_file] [-l limit]
[-o ssh_option] [-P port] [-R num_requests] [-S program]
[-s subsystem | sftp_server] host
sftp [user@]host[:file ...]
sftp [user@]host[:dir[/]]
sftp -b batchfile [user@]host
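So the transfer can also be driven from the zCX side: with the sftp client inside zCX you could pull the tarball from z/OS instead, assuming an sshd is reachable on the z/OS host (the hostname and paths here are hypothetical):

admin@cn02:~$ sftp myuser@zoshost.example.com:/u/myuser/myapp.tar /tmp/
admin@cn02:~$ docker load -i /tmp/myapp.tar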

In response to the original question 2 above: there is currently no z/OSMF workflow to upload tarballs of images. It is an interesting idea, though. Push/pull from a registry is the more common way of moving images around.
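For reference, the registry route looks roughly like this (the registry host and image names here are hypothetical):

# Tag the image for the registry and push it
docker tag myapp:1.0 registry.example.com:5000/myapp:1.0
docker push registry.example.com:5000/myapp:1.0

# On the zCX side, pull it back down
docker pull registry.example.com:5000/myapp:1.0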
You may want to consider staying in touch with a project called zorow at the Open Mainframe Project (https://www.openmainframeproject.org/projects/zorow). We are in the early stages of creating a community there where people can share this kind of workflow.

Related

docker command not available in custom pipe for BitBucket Pipeline

I'm working on a build step that handles common deployment tasks in a Docker Swarm Mode cluster. As this is a common problem for us and for others, we've shared this build step as a BitBucket pipe: https://bitbucket.org/matchory/swarm-secret-pipe/
The pipe needs to use the docker command to work with a remote Docker installation. This doesn't work, however, because the docker executable cannot be found when the pipe runs.
The following holds true for our test repository pipeline:
The docker option is set to true:
options:
  docker: true
The docker service is enabled for the build step:
main:
  - step:
      services:
        - docker: true
Docker works fine in the repository pipeline itself, but not within the pipe.
Pipeline log shows the docker path being mounted into the pipe container:
docker container run \
--volume=/opt/atlassian/pipelines/agent/build:/opt/atlassian/pipelines/agent/build \
--volume=/opt/atlassian/pipelines/agent/ssh:/opt/atlassian/pipelines/agent/ssh:ro \
--volume=/usr/local/bin/docker:/usr/local/bin/docker:ro \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/matchory/swarm-secret-pipe:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/matchory/swarm-secret-pipe \
--workdir=$(pwd) \
--label=org.bitbucket.pipelines.system=true \
radiergummi/swarm-secret-pipe:1.3.7@sha256:baf05b25b38f2a59b044e07f4ad07065de90257a000137a0e1eb71cbe1a438e5
The pipe is pretty standard and uses a recent Alpine image; nothing special in that regard. The PATH is never overwritten. Now for the fun part: If I do ls /usr/local/bin/docker inside the pipe, it shows an empty directory:
ls /usr/local/bin
total 16K
drwxr-xr-x 1 root root 4.0K May 13 13:06 .
drwxr-xr-x 1 root root 4.0K Apr 4 16:06 ..
drwxr-xr-x 2 root root 4.0K Apr 29 09:30 docker
ls /usr/local/bin/docker
total 8K
drwxr-xr-x 2 root root 4.0K Apr 29 09:30 .
drwxr-xr-x 1 root root 4.0K May 13 13:06 ..
ls: /usr/local/bin/docker/docker: No such file or directory
As far as I understand pipelines and Docker, /usr/local/bin/docker should be the docker binary file. Instead, it appears to be an empty directory for some reason.
What is going on here?
I've also looked at other, official pipes. They don't do anything differently, but seem to be using the docker command just fine (e.g. the Azure pipe).
After talking to BitBucket support, I solved the issue. As it turns out, if the docker context is changed, any docker command is sent straight to the remote docker binary, which (on our servers) lives at a different path than it does in BitBucket Pipelines!
Because we had changed the docker context before using the pipe, the docker CLI mounted into the pipe still had the remote context set, but it searched for the docker binary at a different place, so the No such file or directory error was thrown.
TL;DR: Always restore the default docker host/context before passing control to a pipe, e.g.:
script:
  - export DEFAULT_DOCKER_HOST=$DOCKER_HOST
  - unset DOCKER_HOST
  - docker context create remote --docker "host=ssh://${DEPLOY_SSH_USER}@${DEPLOY_SSH_HOST}"
  - docker context use remote
  # do your thing
  - export DOCKER_HOST=$DEFAULT_DOCKER_HOST # <------ restore the default host
  - pipe: matchory/swarm-secret-pipe:1.3.16
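If in doubt, it can help to print the active context and host right before handing control to a pipe; a purely illustrative set of extra script steps:

  - docker context ls                     # shows which context is currently active
  - echo "DOCKER_HOST=${DOCKER_HOST}"     # shows whether the env override is set
  - docker context use default            # fall back to the local daemon if needed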

Hugo server in Docker container not reachable in Windows 10

A few days ago I started a little side project: dockerizing my Hugo build on my Windows 10 machine. The Hugo container itself, which runs as a Linux container, was the easy part and seems to work (at least judging by the console output):
$ docker run --rm -it -p 1313:1313/tcp hugo:latest
Building sites …
Replace Autoprefixer browsers option to Browserslist config.
Use browserslist key in package.json or .browserslistrc file.
Using browsers option cause some error. Browserslist config
can be used for Babel, Autoprefixer, postcss-normalize and other tools.
If you really need to use option, rename it to overrideBrowserslist.
Learn more at:
https://github.com/browserslist/browserslist#readme
https://twitter.com/browserslist
WARN 2019/11/23 14:05:35 found no layout file for "HTML" for "section": You should create a template file which matches Hugo Layouts Lookup Rules for this combination.
                   | DE | EN
+------------------+----+----+
  Pages            |  9 |  7
  Paginator pages  |  0 |  0
  Non-page files   |  0 |  0
  Static files     | 25 | 25
  Processed images |  0 |  0
  Aliases          |  1 |  0
  Sitemaps         |  2 |  1
  Cleaned          |  0 |  0

Total in 680 ms
Watching for changes in /app/{assets,content,i18n,layouts,static}
Watching for config changes in /app/config.yaml
Environment: "development"
Serving pages from memory
Running in Fast Render Mode. For full rebuilds on change: hugo server --disableFastRender
Web Server is available at http://localhost:1313/ (bind address 127.0.0.1)
Press Ctrl+C to stop
The Dockerfile I run looks like this:
FROM node:13-alpine
ENV VERSION 0.59.1
EXPOSE 1313
RUN apk add --no-cache git openssl py-pygments libc6-compat g++ curl
RUN curl -L https://github.com/gohugoio/hugo/releases/download/v${VERSION}/hugo_extended_${VERSION}_Linux-64bit.tar.gz | tar -xz \
&& cp hugo /usr/bin/hugo \
&& apk del curl \
&& hugo version
WORKDIR /app
COPY assets assets
COPY content content
COPY i18n i18n
COPY layouts layouts
COPY static static
COPY package.json package.json
COPY postcss.config.js postcss.config.js
COPY config.yaml config.yaml
RUN yarn
CMD [ "hugo", "server", "--buildDrafts","--watch" ]
The hard part now is connecting to the running Hugo server from the browser on my host system (Windows 10 Pro).
I basically tried everything: localhost:1313 and http://172.17.0.2:1313/ (the container IP I get by running docker inspect <container ID>), with the firewall enabled and disabled, but nothing seems to work.
To verify that it should work I ran hugo server --buildDrafts --watch directly on my host system and can access the server just fine. I also invested several hours in reading up on the issue, but none of the solutions seem to work in my case.
How can I solve this issue?
Here's your problem:
Web Server is available at http://localhost:1313/ (bind address 127.0.0.1)
Hugo is binding to the loopback address (127.0.0.1) inside the container. It does this by default because hugo serve is meant strictly as a development tool, not for actually serving pages in production. In order to avoid any security issues, it defaults to binding to the loopback interface so that you can only connect to it from the local machine.
Unfortunately, in the context of a container, localhost means "this container". So with Hugo bound to 127.0.0.1 inside a container you'll never be able to connect to it.
The solution is to provide a different bind address using the --bind option. You probably want to modify your Dockerfile so that it looks like:
CMD [ "hugo", "server", "--buildDrafts", "--watch", "--bind", "0.0.0.0" ]
This will cause hugo to bind to "all interfaces" inside the container, which should result in it working as you expect.
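If you want to verify this without rebuilding the image, the bind address can also be overridden at run time (purely illustrative; the image tag matches the one used above):

docker run --rm -it -p 1313:1313/tcp hugo:latest hugo server --buildDrafts --watch --bind 0.0.0.0

After that, http://localhost:1313/ should respond from the Windows host (with Docker Toolbox you would browse to the VM's IP, commonly 192.168.99.100, instead of localhost).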

Setting up Gitlab using Docker on Windows host, issue with shared folders

TL;DR
Does anyone know how to solve the "Failed asserting that ownership of "/var/opt/gitlab/git-data" was git" error?
Background:
I want to set up the GitLab Docker image on Windows Server 2012 R2 running Docker Toolbox, version 17.04.0-ce, build 4845c56.
Issue/Question
I can't get the shared folder to work properly on the D drive of the server. I read that I needed to add the folder to the VirtualBox VM, which I did via the Settings/Shared Folders menu in the VirtualBox GUI. I gave it the name "gitlab", set the path to "D:\data\gitlab", then checked Auto-mount and Make Permanent and set it to full access.
I started the docker machine and ran "docker-machine ssh $machine-name". I noticed that there was no /media directory, so I created a folder under the home directory (/home/docker/gitlab) and then mounted the shared folder using the following command, which I found in several forums:
sudo mount -t vboxsf gitlab /home/docker/gitlab
At this point I can add files to the Windows host directory or the Docker VM and it seems to work fine and the test files show up.
Now when I spin up the Gitlab Docker image, I use the following command modified from their documentation:
docker run --detach --hostname gitlab.example.com --publish 80:80 --name gitlab --volume /home/docker/gitlab:/etc/gitlab:Z --volume /home/docker/gitlab/logs:/var/log/gitlab:Z --volume /home/docker/gitlab/data:/var/opt/gitlab:Z gitlab/gitlab-ce
Now I know that it appears to be writing to the shared drive, because all of these files are generated, but then it crashes after a few seconds and I receive the following error log.
Error Log:
Thank you for using GitLab Docker Image!
Current version: gitlab-ce=9.3.6-ce.0
Configure GitLab for your system by editing /etc/gitlab/gitlab.rb file
And restart this container to reload settings.
To do it use docker exec:
docker exec -it gitlab vim /etc/gitlab/gitlab.rb
docker restart gitlab
For a comprehensive list of configuration options please see the Omnibus GitLab readme
https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md
If this container fails to start due to permission problems try to fix it by executing:
docker exec -it gitlab update-permissions
docker restart gitlab
Installing gitlab.rb config...
Generating ssh_host_rsa_key...
Generating public/private rsa key pair.
Your identification has been saved in /etc/gitlab/ssh_host_rsa_key.
Your public key has been saved in /etc/gitlab/ssh_host_rsa_key.pub.
The key fingerprint is:
SHA256:GyFlf9tl7ZuEbuE+dwZUYiyahdsRzpC1T7kwyUvoD+o root@gitlab.example.com
The key's randomart image is:
+---[RSA 2048]----+
| o .+oo |
| o .o*+o+.o|
| . . o*#+oo+|
| . o+o.Oo= |
| S o o++..|
| + oo + o|
| o .+ + |
| . o. .o|
| E .o..|
+----[SHA256]-----+
Generating ssh_host_ecdsa_key...
Generating public/private ecdsa key pair.
Your identification has been saved in /etc/gitlab/ssh_host_ecdsa_key.
Your public key has been saved in /etc/gitlab/ssh_host_ecdsa_key.pub.
The key fingerprint is:
SHA256:Kb99jG8EtMuTSdIuqBT3GLeD1D0wwTEcQhKgVJUlBjs root@gitlab.example.com
The key's randomart image is:
+---[ECDSA 256]---+
| .o+=*=+=+ |
|.. oo..=.. |
|. E . * . |
| o + +.B |
| +.BS* * |
| . +o= B . |
| . . .o = |
| . o. + |
| . .+. |
+----[SHA256]-----+
Generating ssh_host_ed25519_key...
Generating public/private ed25519 key pair.
Your identification has been saved in /etc/gitlab/ssh_host_ed25519_key.
Your public key has been saved in /etc/gitlab/ssh_host_ed25519_key.pub.
The key fingerprint is:
SHA256:lVxpu0UoyNPWVY6D9c+m/bUTyvKP6vuR4cTOYwQ0j+U root@gitlab.example.com
The key's randomart image is:
+--[ED25519 256]--+
| . o +.=o..|
| +.=o#o.+ |
| o+=.Eo o|
| . + .o.|
| S B +|
| B o= |
| .Oo +|
| ..o+.+|
| .+*+.oo|
+----[SHA256]-----+
Preparing services...
Starting services...
Configuring GitLab package...
/opt/gitlab/embedded/bin/runsvdir-start: line 24: ulimit: pending signals: cannot modify limit: Operation not permitted
/opt/gitlab/embedded/bin/runsvdir-start: line 34: ulimit: max user processes: cannot modify limit: Operation not permitted
/opt/gitlab/embedded/bin/runsvdir-start: line 37: /proc/sys/fs/file-max: Read-only file system
Configuring GitLab...
================================================================================
Error executing action `run` on resource 'ruby_block[directory resource: /var/opt/gitlab/git-data]'
================================================================================
Mixlib::ShellOut::ShellCommandFailed
------------------------------------
Failed asserting that ownership of "/var/opt/gitlab/git-data" was git
---- Begin output of set -x && [ "$(stat --printf='%U' $(readlink -f /var/opt/gitlab/git-data))" = 'git' ] ----
STDOUT:
STDERR: + readlink -f /var/opt/gitlab/git-data
+ stat --printf=%U /var/opt/gitlab/git-data
+ [ UNKNOWN = git ]
---- End output of set -x && [ "$(stat --printf='%U' $(readlink -f /var/opt/gitlab/git-data))" = 'git' ] ----
Ran set -x && [ "$(stat --printf='%U' $(readlink -f /var/opt/gitlab/git-data))" = 'git' ] returned 1
Cookbook Trace:
---------------
/opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/libraries/storage_directory_helper.rb:124:in `validate_command'
/opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/libraries/storage_directory_helper.rb:112:in `block in validate'
/opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/libraries/storage_directory_helper.rb:111:in `each_index'
/opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/libraries/storage_directory_helper.rb:111:in `validate'
/opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/libraries/storage_directory_helper.rb:87:in `validate!'
/opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/definitions/storage_directory.rb:35:in `block (3 levels) in from_file'
Resource Declaration:
---------------------
# In /opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/definitions/storage_directory.rb
26: ruby_block "directory resource: #{params[:path]}" do
27: block do
28: # Ensure the directory exists
29: storage_helper.ensure_directory_exists(params[:path])
30:
31: # Ensure the permissions are set
32: storage_helper.ensure_permissions_set(params[:path])
33:
34: # Error out if we have not achieved the target permissions
35: storage_helper.validate!(params[:path])
36: end
37: not_if { storage_helper.validate(params[:path]) }
38: end
39: end
Compiled Resource:
------------------
# Declared in /opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/definitions/storage_directory.rb:26:in `block in from_file'
ruby_block("directory resource: /var/opt/gitlab/git-data") do
params {:path=>"/var/opt/gitlab/git-data", :owner=>"git", :group=>nil, :mode=>"0700", :name=>"/var/opt/gitlab/git-data"}
action [:run]
retries 0
retry_delay 2
default_guard_interpreter :default
block_name "directory resource: /var/opt/gitlab/git-data"
declared_type :ruby_block
cookbook_name "gitlab"
recipe_name "gitlab-shell"
block #<Proc:0x000000054a99a8@/opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/definitions/storage_directory.rb:27>
not_if { #code block }
end
Platform:
---------
x86_64-linux
Does anyone know how to solve the "Failed asserting that ownership of "/var/opt/gitlab/git-data" was git" error? I'm still somewhat new to Docker and setting up GitLab, so it's very possible I have overlooked something simple. I've spent several hours Googling this, and it seems that others also have a lot of issues getting shared folders to work from Windows using Docker Toolbox, so hopefully this will help others as well.
Background
One solution (maybe not the best) for those of us stuck in a world without native Docker is to use vdi drives and shared folders. The vdi drive can live on any drive we want (which is important if you don't want to use the C drive) and allows the GitLab container to chown anything it wants, so this is where we'll store the persistent volumes. The downside is that a vdi is not as transparent as a simple shared folder; for backups, a shared folder makes things a little easier and more transparent.
Disclaimer
I'm not an expert on any of this, so please use caution and take what I say with a grain of salt.
Steps to perform
Create a new vdi drive and shared folder on any drive you'd like
Turn off your docker machine you want to use for gitlab
In VirtualBox, go into the settings of your docker machine, then Storage, click the Add Hard Disk icon, and choose Create new disk
Select VDI (VirtualBox Disk Image) and click Next
Select Dynamically allocated and click Next
Choose the name and location for the vdi by clicking the folder icon with the green caret, select the maximum size the vdi can grow to, and click Create
Back in the settings menu, switch to Shared Folders and click the Add new shared folder icon
Create a gitlabbackups shared folder pointing wherever you want and select Auto-mount and Make Permanent
Now partition and format the drive
Start/enter the docker machine (either use the VBox window or docker-machine ssh <your docker machine name> from a cmd prompt)
Run fdisk -l to list the available drives; if you've only attached the one extra vdi drive, you should see something like /dev/sdb
The next steps are irreversible, so perform them at your own discretion: run fdisk /dev/sdb, then n for a new partition, p for primary, and 1 for the partition number
Now format the new partition (you might need sudo here as well): mkfs.ext4 /dev/sdb1
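As a quick sanity check (assuming the partition was created and formatted as above), you can mount it manually inside the docker machine; /mnt/sdb1 is the mount point reused by the startup script further down:

sudo mkdir -p /mnt/sdb1
sudo mount /dev/sdb1 /mnt/sdb1
df -h /mnt/sdb1    # should list the new ext4 partition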
Run docker with persistent volumes on second vdi and backups in shared folder
Sample Dockerfile:
FROM gitlab/gitlab-ce:latest
RUN apt-get update
RUN apt-get install -y cron
# Add a cron job to backup everyday
RUN echo "0 5 * * * /opt/gitlab/bin/gitlab-rake gitlab:backup:create STRATEGY=copy CRON=1" | crontab -
# For an unknown reason, the cron job won't actually run unless cron is restarted
CMD service cron restart && \
/assets/wrapper
Sample docker-compose.yml:
version: "3.0"
services:
gitlab:
build: .
restart: always
ports:
- "80:80"
volumes:
# These volumes are on the vdi we created above
- "/mnt/sdb1/etc/gitlab/:/etc/gitlab"
- "/mnt/sdb1/var/log/gitlab:/var/log/gitlab"
- "/mnt/sdb1/var/opt/gitlab:/var/opt/gitlab"
# This volume sits in the shared folder defined above
- "/gitlabbackups:/var/opt/gitlab/backups"
cap_add:
# These seem to be necessary for the mounted drive to work properly
# https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities
- SYS_ADMIN
- DAC_READ_SEARCH
Because there seems to be an issue with auto-mounting the vdi, use a startup script. For example (assuming you used the D drive; replace anything inside <...> as needed), a sample run.bat:
@cd /d D:\<path to docker-compose.yml, assuming it's on the D drive>
@docker-machine start <docker machine name>
@FOR /f "tokens=*" %%i IN ('docker-machine env <docker machine name>') DO @%%i
@docker-machine ssh <docker machine name> sudo mount /dev/sdb1 /mnt/sdb1
@docker-compose build
@docker-compose up -d
@REM If the docker machine was completely off, running only 'docker-compose up -d' will
@REM not mount the volumes properly. Stopping and restarting the container results in
@REM the volumes mounting properly.
@docker stop <gitlab container name>
@docker start <gitlab container name>
@pause
Note: the gitlab container name can be found by running docker-compose up once and then docker ps -a to check it; it usually follows the convention <directory the compose file is in>_<service name in the compose file, e.g. gitlab here>_1
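If you prefer not to scan the full docker ps output, a purely illustrative shortcut is:

docker ps -a --format "{{.Names}}"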
Assuming all went well and you changed everything in the <...>'s above for your situation, you should be able to run the batch file and have GitLab up and running in such a way that it stores everything on the alternate drive: persistent working files in the vdi (to get around the VirtualBox POSIX limitations) and backups transparently stored in the shared folder.
Hope this helps other poor souls that don't have access to native docker yet.

How to know if my program is completely started inside my docker with compose

In my CI chain I execute end-to-end tests after a docker-compose up. Unfortunately my tests often fail because, even though the containers are properly started, the programs contained in my containers are not.
Is there an elegant way to verify that my setup is completely started before running my tests?
You could poll the required services to confirm they are responding before running the tests.
curl has built-in retry logic, and it's fairly trivial to build retry logic around some other type of service test.
#!/bin/bash

await(){
    local url=${1}
    local seconds=${2:-30}
    curl --max-time 5 --retry 60 --retry-delay 1 \
        --retry-max-time ${seconds} "${url}" \
        || exit 1
}

docker-compose up -d
await http://container_ms1:3000
await http://container_ms2:3000
run-ze-tests
The alternative to polling is an event-based system.
If all your services push notifications to an external sink (scaeda gave the example of a log file; you could also use something like Amazon SNS), your services emit a "started" event, and you subscribe to those events and run whatever you need once everything has started.
Docker 1.12 did add the HEALTHCHECK build instruction. Maybe this is available via Docker events?
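For what it's worth, a minimal sketch of that mechanism (the /health endpoint and port here are hypothetical) is a HEALTHCHECK in the Dockerfile:

HEALTHCHECK --interval=5s --timeout=3s --retries=5 \
  CMD curl -f http://localhost:3000/health || exit 1

The resulting state can be read with docker inspect --format '{{.State.Health.Status}}' <container>, and health transitions do appear in docker events (e.g. docker events --filter event=health_status).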
If you have control over the docker engine in your CI setup, you could execute docker logs [Container_Name] and read out the last lines to look for a message emitted by your application.
RESULT=$(docker logs [Container_Name] 2>&1 | grep [Search_String])
logs output example:
Agent pid 13
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
Parsing a specific line:
RESULT=$(docker logs ssh_jenkins_test 2>&1 | grep Enter)
result:
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)

How to netcat multiple files without tar?

Currently I am transporting files back and forth over telnet, and I would like to send multiple files at once.
However, my target platform (a Blackfin processor) does not have "tar" enabled in its kernel/busybox configuration (a uClinux distribution).
As you all know, the normal commands are:
nc -p 12345 -l | tar -x
tar -c * | nc 192.168.0.100 12345 # with x.100 the robot IP address
How can I send multiple files using netcat without using tar?
Please consider that I cannot easily add binaries to the platform. It would be best to do it with basic utilities and/or shell scripts.
I finally managed to do this myself; it can be done!
Here $l> stands for a prompt on your machine with IP 192.168.0.10, and $e> for a prompt on the embedded device without tar (in my case a robot). It uses old-fashioned dd, which is able to copy an entire disk.
$l> nc -p 12345 -l | dd obs=4K of=/tmp/file.jffs2
$e> dd ibs=4K if=/dev/mtdblock2 | nc 192.168.0.10 12345
That is it, but because not everybody knows how to read a filesystem in this form, this is how you mount it:
file /tmp/file.jffs2
/tmp/file.jffs2: Linux jffs2 filesystem data little endian
sudo su #careful
mknod /tmp/mtdblock0 b 31 0
modprobe loop
losetup /dev/loop0 /tmp/file.jffs2
modprobe mtdblock
modprobe block2mtd
echo "/dev/loop0,128KiB" > /sys/module/block2mtd/parameters/block2mtd
modprobe jffs2
mkdir /media/robot
mount -t jffs2 /tmp/mtdblock0 /media/robot
Ctrl-D #back as normal user
And yes, you need the loopback device, or else:
sudo mount -t jffs2 /tmp/file.jffs2 /media/robot
mount: /tmp/file.jffs2 is not a block device (maybe try `-o loop'?)
Logically so: it is a file (chars), not a block device. The only thing I do not know is whether there is a dd syntax with which the command on the embedded device can select only a subset of the filesystem to be included. I don't think this is likely, because that would require dd to understand jffs2, while its strength is its raw byte-copying behaviour.
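Since the transfer is raw bytes end to end, a quick integrity check is worth doing; assuming the busybox build on the device includes md5sum (not guaranteed), something like:

$e> md5sum /dev/mtdblock2
$l> md5sum /tmp/file.jffs2

Matching checksums confirm the image arrived intact.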
