I'm using VS 2017 15.7.3 with Docker enabled for an ASP.NET Core 2.1 project that has been committed to a private GitLab server I'm running on site. I turned on the registry features of GitLab, and I can connect to and log in to the registry server from my box.
So, with regard to VS2017, I created a new solution (Solution1) that shares its name with my GitLab repo. I have configured the publish settings for this project with my credentials and the push location of https://mygitlabserver.example.com:4567/solution1/solution1/.
The profile type I selected is Container Registry->Custom. I'm trying to push out an image for the first project in the solution (Project1). I have not modified the VS project properties Package tab settings, so the package ID remains the same as it started - Project1.
When I publish, I get a generic error in a tmp file in %LOCALAPPDATA%\Temp whose contents are as follows:
System.Exception: Running the docker.exe push command failed.
at Microsoft.VisualStudio.Web.Azure.Publish.ContainerRegistryProfileVisual.<PostPublishAsync>d__24.MoveNext()
I confirmed that I can tag this image with Docker on the command line using the above URL and push it out successfully. I'm not sure whether VS 2017 has some other settings I need to use, but the documentation is light for working with a private server - the docs seem to push Azure, and I'm finding very little outside of that.
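For reference, the manual sequence that worked from the command line presumably looked roughly like this (the local image name project1 is an assumption for illustration; the registry path comes from the publish settings above):

docker login mygitlabserver.example.com:4567
docker tag project1 mygitlabserver.example.com:4567/solution1/solution1
docker push mygitlabserver.example.com:4567/solution1/solution1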
Can anyone give any guidance or the location of more detailed logfiles?
Location of log files
C:\Users\User\AppData\Local\Temp
Related
I'm trying to get as much of my CI/CD process automated as possible. Here is what I've got at this point:
Azure App Service using a docker container.
Azure DevOps code repository.
Right now using Docker Hub as the repo for my docker container. Can move to Azure later.
I can push code to the repo, it builds the new image and pushes it to Docker Hub, and once that's done it gets deployed to the Azure App Service just fine.
Where I'm running into issues is we have a Laravel app that is being deployed via this container. With Laravel there is an .env file that I don't want to push up to the code repository. How would one go about moving a file into the container once it's been deployed?
All I've been finding is how to do it via SSH or through the startup command, but all the examples assume the file is already in the image.
Thanks for any tips/tricks/links/etc.! I've got a feeling this is one of those "ahh, that was easy" things and what I'm searching for just isn't the right verbiage.
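For what it's worth, one commonly suggested approach here is to not copy the .env file into the container at all, but to supply its values as App Service application settings, which are injected into the container as environment variables that Laravel's env() helper reads just like .env entries. A hedged sketch with the Azure CLI (resource group, app name, and setting values are placeholders):

az webapp config appsettings set --resource-group my-rg --name my-laravel-app --settings APP_ENV=production APP_KEY="base64:<your-key>" DB_HOST=mydbhost

This keeps the secrets out of both the code repository and the image.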
I've recently joined a new company which already has a version of Google Tag Manager server-side up and running. I am new to Google Cloud Platform (GCP), and I have not been able to find the supposed Docker image in the image repository for our account. At the very least, I am trying to figure out how to check whether there is one, and how to correlate its digest with the deployed image located at gcr.io/cloud-tagging-10302018/gtm-cloud-image.
I've tried deploying it, both automatically provisioned in my own cloud account and by running the manual steps, and got it working. But I can't for the life of me figure out how to check which version we have deployed at our company, as it is already live.
I suspect it is quite a bit of an old version (unless it auto-updates?), seeing as the GTM server-side docker repository has had frequent updates.
Being new to the whole business of container images with Docker, I figured I could use Cloud Shell to check it that way, but it seems that when you set up the specific App Engine instance with the shell script provided (located here), it doesn't really "load" a Docker image the way it would if you'd deployed it yourself. At least I don't think so, because I can't find any info using docker commands in the Cloud Shell of said GCP project running the flexible App Engine environment.
Any guidance on how to find out which version of GTM server-side is running in our Appengine instance?
To check which Docker images your App Engine Flex instance uses, SSH into the instance. You can do this by going to the Instances tab, choosing the correct service and version, and clicking the SSH button, or by using this gcloud command in your terminal or Cloud Shell:
gcloud app instances ssh "INSTANCE_ID" --service "SERVICE_NAME" --version "VERSION_ID" --project "PROJECT_ID"
Once you have successfully connected to your instance over SSH, run the docker images command to list your Docker images.
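To correlate the running image with a published version, a hedged sketch using standard docker commands run inside the instance (the image path is the one from the question):

docker images --digests
docker inspect --format '{{index .RepoDigests 0}}' gcr.io/cloud-tagging-10302018/gtm-cloud-image

The sha256 digest printed there can then be compared against the digests listed for gtm-cloud-image in the registry.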
I created a Container Registry and then pushed the Docker image of my web app to that registry. I created a container instance, and it is working fine.
Now I have to deploy this image to a Web App. There are two options I found:
First, I can choose the 'Deploy to web app' option directly from where the Docker image is stored.
Second, I can create a 'Web App for Containers' resource using the same Docker image.
a.) When I tried the first option: after it deployed successfully, running the web app displayed the default page. When I connected over FTP to check the files in the wwwroot folder, only the 'hostingstart.html' file was present.
b.) When I tried the second option: after it deployed successfully, running the web app displayed the message 'The Web App's container could not start. Please try again in a few minutes. If you are an administrator of this Web App please verify your container settings and go to Azure Portal to review the diagnostic logs'.
When I connected over FTP to check the files in the wwwroot folder, again only the 'hostingstart.html' file was present.
The Docker image itself has no issue, as I am able to run it locally and on a container instance.
My first question: are the above two methods the same thing? Because in the first method it looks like a normal web app with the Kudu/App Service Editor option available, but in the second method I don't find Kudu/App Service Editor support.
My second question: since I want to implement Web App for Containers, is the second option the one I should go for?
Any idea what I am missing?
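As an aside, a hedged Azure CLI sketch of the second option, creating a Web App for Containers directly from a registry image (resource group, plan, app, and image names are placeholders):

az webapp create --resource-group my-rg --plan my-linux-plan --name my-container-app --deployment-container-image-name myregistry.azurecr.io/mywebapp:latest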
As shared by the original poster in the comments, retrying the push of the image to the registry worked.
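If the push has to be retried by hand, the usual sequence is roughly this (registry and image names are assumptions):

az acr login --name myregistry
docker tag mywebapp myregistry.azurecr.io/mywebapp:latest
docker push myregistry.azurecr.io/mywebapp:latest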
I have a Google Cloud project of which I am one of the owners. It was created by another developer, who added me as an owner. Within that project I created a VM instance, on which I installed Docker. After installing Docker, I created an image of my Node.js application by providing the Git repository as the argument.
However, after setting the gcloud config parameters, I get a 500 error while trying to push that Docker image:
Error: Status 500 trying to push repository <project-id>/<image-name>: "Internal Error."
My gcloud and Docker version information:
Google Cloud SDK 0.9.71
Docker version 1.7.1, build 786b29d
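For an SDK of that vintage, the tag-and-push sequence typically looked something like this (the local image name is a placeholder; at the time, gcloud docker wrapped the docker CLI with GCR credentials, and some releases used gcloud preview docker instead):

docker tag my-node-app gcr.io/<project-id>/my-node-app
gcloud docker push gcr.io/<project-id>/my-node-app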
You were probably hit by the Google Cloud Storage outage that was going on last night: https://status.cloud.google.com/incident/storage/16027
Would you mind trying again?
Sorry for the inconvenience!
Jeffrey van Gogh
Google Container Registry Team
I am trying to use the ArtifactDeployer plugin to copy the artifacts from the WORKSPACE/jobs/ directory into a remote directory on a Windows 7 machine. The Jenkins machine's OS is Linux.
However, Jenkins never manages to succeed, throwing errors like:
[ArtifactDeployer] - Starting deployment from the post-action ...
[ArtifactDeployer] - [ERROR] - Failed to deploy. Can't create the directory ...
Build step [ArtifactDeployer] - Deploy artifacts from workspace to remote directories' changed build result to FAILURE
I am not sure how to use the Remote Directory parameter.
Here is how I am trying to specify the remote directory:
remote Directory - \ip address of that machine\users\public
Is it possible to copy artifacts that are on a Linux machine to a Windows 7 machine?
Please let me know how to specify the remote directory.
Reading the plugin page doesn't seem to be very helpful when it comes to configuring it. The text seems to hint that you need local access (from the node where the job is running) to the (remote) folder you want to deploy to. For a first test, use a local directory (on your Linux box) to see if you can get it to work. Second, the correct way to address a Windows share is \\servername\sharename\subdirs. Remember that you might need to log in to the share.
You might need to install Samba or CIFS utilities to connect to the Windows share from your Linux system (see the mount sketch after the example below). There is also a setting in Windows that determines whether your Windows box will accept connections to aliases. If that is not the case, you need to use the hostname in order to access the share, so the IP address and any alias for the server will not work.
e.g.:
hostname: RTS3524
alias: JENKINSREPO
ip: 192.168.15.33
share: temp
For the example above, only \\RTS3524\temp will work; \\192.168.15.33\temp will not.
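To make the share usable as a "local" path from the Linux node, a hedged sketch with cifs-utils (mount point, username, and password are placeholders):

sudo apt-get install cifs-utils
sudo mkdir -p /mnt/jenkins-drop
sudo mount -t cifs //RTS3524/temp /mnt/jenkins-drop -o username=WINUSER,password=WINPASS

Once mounted, pointing the plugin's Remote Directory at /mnt/jenkins-drop is a local write as far as Jenkins is concerned.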