Is it possible to run app tests on Android emulator inside Bitbucket pipelines? - docker

I have a Docker image that runs an Appium server and an Android emulator. I am able to run a container based on that image on my computer (it requires the --privileged flag for that). I am also able to run automated tests in the emulator without any issues.
Now I would like to run the emulator on Bitbucket Pipelines. However, Bitbucket Pipelines doesn't allow starting a Docker container with --privileged (and many other Docker flags) for security reasons. As I understand it, this flag is what allows the emulator to run.
I also tried adding the Docker image to the bitbucket-pipelines.yml file, hoping that I would be able to run the emulator directly on the host instead of inside a container, but that didn't work either: I got empty results from the commands "adb devices" and "emulator -list-avds".
Does anyone know anything that could help achieve this goal, i.e. running automated Android UI tests in Bitbucket Pipelines?

Yes, it is possible. There are several answers at the link below that do not require the --privileged flag:
android environment using docker and bitbucket pipelines
Ming C's answer links to a GitHub page with a guide on how to run an emulator in the Docker build machine:
https://github.com/mingchen/docker-android-build-box#run-an-android-emulator-in-the-docker-build-machine
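To give a rough idea, a bitbucket-pipelines.yml step based on that guide could look like the sketch below. The image name, API level, AVD name and test command are assumptions drawn from the linked repository's examples, not a verified configuration:

# minimal sketch of a bitbucket-pipelines.yml step, assuming the mingchen/android-build-box image
image: mingchen/android-build-box:latest
pipelines:
  default:
    - step:
        name: UI tests on an emulator
        script:
          # install a system image and create an AVD (names are placeholders)
          - sdkmanager "system-images;android-29;default;x86_64"
          - echo no | avdmanager create avd -n test -k "system-images;android-29;default;x86_64"
          # start the emulator headless and wait for it to boot
          - emulator -avd test -no-window -no-audio -no-boot-anim &
          - adb wait-for-device
          # run the instrumented tests
          - ./gradlew connectedAndroidTest

Note that without hardware acceleration the emulator will be slow, so expect long boot times in the pipeline.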

Related

Testcontainers do not start after replacing Docker Desktop with minikube

I want to make the Testcontainers in my Java integration tests work with minikube replacing Docker Desktop.
I followed the article below to get started:
https://www.atomicjar.com/2021/10/docker-on-windows-and-macos/#minikube
This is what I've got in testcontainers.properties
docker.client.strategy=org.testcontainers.dockerclient.EnvironmentAndSystemPropertyClientProviderStrategy
docker.host=tcp\://192.168.64.2\:2376
docker.cert.path=/Users/username/.minikube/certs
docker.tls.verify=true
Although my Docker is up and running, I'm getting the following exception:
Caused by: java.lang.IllegalStateException: Could not find a valid Docker environment. Please see logs and check configuration
Can anybody please suggest anything to make it work?
Thanks in advance.
If you are using Gradle, try the --no-daemon flag so that a fresh daemon is used. Your old Gradle daemon is still using your previous Testcontainers properties; also restart your IDE if you're running your build inside it.
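For example, assuming you use the Gradle wrapper, you can stop any lingering daemons and run the build once without one:

# stop Gradle daemons that may have cached the old Testcontainers configuration
./gradlew --stop
# run the tests without a daemon so the updated testcontainers.properties is picked up
./gradlew test --no-daemon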
After restarting Minikube and the IntelliJ editor, and updating testcontainers-bom to the latest version (from 1.15 to 1.16.2), I was able to pull some third-party Docker images. This means Docker is working now.
However, I'm still trying to find a way to work with local images (other applications' Docker images) for integration testing, as it used to work with Docker Desktop.

How to detect in a bash script or Dockerfile if the build is taking place on a Docker Hub server?

So I have a Dockerfile which compiles and installs GTSAM with its Cython wrapper. The Docker setup works fine on the local machine, but it runs out of memory when building on Docker Hub's automated build.
I believe I can reduce the memory usage by switching to make -j1, but I'd still like faster builds when performed locally.
I tried accessing /sys/fs/cgroup/memory/memory.limit_in_bytes, which shows 9223372036854771712, way more than the 2 GB limit on the servers.
Is there a way to detect if the build is taking place through the automated build, and adjust the -j flag accordingly?
Docker Hub sets environment variables that are available at build time.
For example, you can test if the build process happens on Docker Hub by checking if SOURCE_BRANCH is set.
The full list of env variables can be found here.
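As a rough sketch (the variable handling is an assumption; inside a Dockerfile you may need to forward SOURCE_BRANCH explicitly as a build argument), the check could look like this:

# fall back to a single make job when SOURCE_BRANCH suggests a Docker Hub automated build
if [ -n "$SOURCE_BRANCH" ]; then
    MAKE_JOBS=1            # memory-constrained Docker Hub builder
else
    MAKE_JOBS="$(nproc)"   # use all cores for local builds
fi
make -j"$MAKE_JOBS"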

Use QEMU in GitLab CI instead of Docker image

GitLab CI is highly integrated with Docker.
But sometimes, if the project depends on interaction with the Linux kernel (like LUKS), it cannot work properly.
The cryptsetup project uses Travis CI instead of GitLab CI even though it is hosted on gitlab.com. I don't know if that is just the personal preference of the project maintainer.
Hence, is it possible to run QEMU or Firecracker instead of Docker?
Is there an equivalent alternative to Travis CI within GitLab?
This is not yet supported.
A recent (mid-2019) gitlab-org/gitlab-runner issue 4338 mentions Kata Containers with Firecracker VMs as one possible alternative to Docker Machine for autoscaling.
But this is still being studied.

Use VSCode remote development on docker image without local files

Motivation
As of now, we are using five Docker containers (MySQL, PHP, static...) managed by docker-compose. We only need to access one of them. We currently keep a local copy of all the data and sync it from Windows to the container, but that is very slow, and VSCode on Windows sometimes randomly locks files, causing git rebase origin/master to end in very unpleasant ways.
Desired behaviour
Use VSCode Remote Development extension to:
Edit files inside the container without any mirrored files on Windows
Run git commands (checkout, rebase, merge...)
Run build commands (make, ng, npm)
Still keep Windows, as for many developers it is the preferred platform.
Question
Is it possible to develop inside a docker container using VSCode?
I have tried to follow the official guide, but it does seem to require us to have mirrored files. We also use WSL.
As FSCKur points out, this is the exact scenario VSCode dev containers are supposed to address, but on Windows I've found the performance to be unusable.
I've settled on running VSCode and Docker inside a Linux VM on Windows, and have seen a 96% time saving in things like spinning up a server and watching code for changes, making this setup my preferred way now.
The standardisation of devcontainer.json and being able to use GitHub Codespaces if you're away from your normal dev machine make this whole setup a pleasure to use.
See https://stackoverflow.com/a/72787362/183005 for a detailed timing comparison and setup details.
This sounds like exactly what I do. My team uses Windows on the desktop, and we develop a containerised Linux app.
We use VSCode dev containers. They are an excellent solution for the scenario.
You may also be able to SSH to your docker host and code on it, but in my view this is less good, because you want to keep all customisation "contained" - I have installed a few quality-of-life packages in my dev container which I'd prefer to keep out of my colleagues' environments and off the docker host.
We have access to the docker host, so we clone our source on the docker host and mount it through. We also bind-mount folders from the docker host for SQL and Redis data - but that could be achieved with docker volumes instead. IIUC, the workspace folder itself does have to be a bind-mount - in fact, no alternative is allowed in the devcontainer.json file. But since you need permission anyway on the docker daemon, this is probably achievable.
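For illustration, a stripped-down devcontainer.json along the lines of what we use might look like this (the service name and workspace path are placeholders, not our real values):

// minimal devcontainer.json sketch for a docker-compose based setup
{
  "name": "app-dev",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "app",
  // must point at the bind-mounted source folder inside the container
  "workspaceFolder": "/workspace",
  // keep the other compose services running when VSCode disconnects
  "shutdownAction": "none"
}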
All source code operations happen in the dev container, i.e. in Linux. We commit and push from there, we edit our code there. If we need to work on the repo on our laptops, we pull it locally. No rcopy, no SCP - github is our "sync" mechanism. We previously used vagrant and mounted the source from Windows - the symlinks were an absolute pain for us, but probably anyone who's tried mounting source code from Windows into Linux will have experienced pain over some element or other.
VSCode in a dev container is very similar to the local experience. You will get bash in the terminal. Realistically, you probably can't work like this without touching bash. However, you can install PowerShell 7 in the container, and/or a 'better' shell (opinion mine) such as zsh.

Docker swarm for usb devices

I'm trying to build a distributed Python application that connects several hosts with Android devices over USB. These hosts then connect over TCP to a central broker for job disbursement. I'm currently tackling the problem of supporting multiple Python builds for developers (Linux/Windows) as well as production (which runs an older OS that requires its own build of Python). On the surface, Docker seems like a good fit here, as it would allow me to support a single Python build.
However, Docker doesn't seem well suited to working with external hardware. There is the --device option to pass a specific device, but that requires the device to be present before the docker run command, and it doesn't persist across device reboots. I can get around that problem with --privileged, but Docker swarm currently does not support that (see issue 24862), so I'd have to manually set up the service on each of the hosts, which would not only be a pain, but would also lose me the niceness of swarm's automatic deployment and rollout.
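For reference, this is roughly what I'm comparing; the image name and device paths are just placeholders:

# pass the USB bus in explicitly (the device node must exist when docker run is executed)
docker run --device /dev/bus/usb:/dev/bus/usb my-adb-image adb devices
# or run privileged with the bus bind-mounted, which copes with devices re-enumerating,
# but isn't supported for swarm services
docker run --privileged -v /dev/bus/usb:/dev/bus/usb my-adb-image adb devices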
Does anyone have any suggestions on how to make something like this work with docker, or am I just barking up the wrong tree here?
You can try modifying the Docker source code and building Docker from source to support your requirement.
There is a hack describing how to do that at the end of this issue:
https://github.com/docker/swarmkit/issues/1244
