Linking policies of a copied Usecase in a different space of IBM Blueworks Live?

I have created a Usecase and linked policies such as BR and Error Msg to it. Then I copied that Usecase to another space.
My problem is that the copied Usecase is still linked to the policies in a different space. How can I use policies that live in the same space and link to those instead?

When you say "Usecase" I assume you are talking about a Process Blueprint.
If you copy a Process Blueprint, only the process is copied. None of the linked assets like Policies are going to be copied with it, so the copy will point to the same Policy as the original.
If instead you want to create copies of everything, you can copy the space rather than just the process. Assuming all the linked assets (e.g. Policies) are in the same space as your process, this will create copies of everything, and the copy of your process will be correctly linked to the copy of the policies.
If the assets are not all in the same space, one workaround is to temporarily move the items you want to copy into a temp space, copy that temp space, and then move everything back to the original location.

How to list all files from the build context that affect the contents of the image?

Is it possible to list all files that get copied into the image from the build context, or affect the final contents of the image in any other way?
I need this for dependency tracking; I am putting together a build system for a project that involves building multiple images and running containers from them in the local dev environment. It needs to be optimized for a rapid code-build-debug cycle, so I want to avoid invoking docker build unless it is actually necessary. Knowing the exact set of files in the build context that end up affecting the image will allow me to declare those as tracked dependencies of the build step that invokes docker build, and avoid unnecessary rebuilds.
I don't need this file list to be generated in advance, though that would be preferable. If no tool exists to generate it in advance but there is a way to obtain it from a built image, that's OK too; the build tool I use is capable of recording dynamic dependencies discovered by a post-build step.
Things that I am acutely aware of, and despite which I still consider pursuing this avenue worthwhile:
I know that the number of dependencies thus tracked can be huge-ish. I believe the build tool can handle it.
I know that there are other kinds of dependencies for a docker image besides files in the build context. This is solved by also tracking those dependencies outside of docker build. Unlike files from the build context, those dependencies are either much fewer in number (e.g. files that the Dockerfile's RUN commands explicitly fetch from the internet), or the problem of obtaining an exhaustive list of them is already solved (e.g. dependencies obtained using a package manager like apt-get are modeled separately, and the installing RUNs are generated into the Dockerfile from the model).
Nothing is copied into the image unless you specifically say so. So check your Dockerfile for COPY statements and you will know which files from the build context are added to the image.
Note that if you have a COPY . ., you might also have a .dockerignore file in the build context listing files you don't want to copy.
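For illustration, here is a minimal sketch (the file and folder names are made up). With a Dockerfile such as

    FROM alpine:3.19
    WORKDIR /app
    # Everything in the build context is copied here,
    # except whatever .dockerignore filters out.
    COPY . .

and a .dockerignore next to it like

    # Excluded from the build context entirely, so COPY . . never sees these
    .git
    docs/
    *.log

the set of files that can affect the image is the build context minus the .dockerignore matches.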
I don't think what you're looking for would be useful even if it were possible. A list of all files in the previously built image wouldn't factor in new files, and it would be difficult to differentiate new files that affect the build from new files that would be ignored.
It's possible that you could parse the Dockerfile, extract every COPY and ADD command, run the current files through a hashing process to identify if they changed from the hash in the image history (you would need to match docker's hashing algorithm which includes details like file ownership and permissions), and then when that hash doesn't match you would know the build needs to run again. You could look at creating a custom buildkit syntax parser, or reuse the low level buildkit code to build your own context processor.
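As a rough illustration of that idea only (this assumes simple shell-form COPY/ADD lines, ignores flags like --chown and --from, ignores .dockerignore, and does not reproduce Docker's own hashing, which also covers ownership and permissions), you could hash the referenced sources yourself and let your build tool compare the result between runs:

    # List COPY/ADD sources (every field except the instruction and the destination),
    # expand them to files, and hash them; diff context.hashes between runs.
    awk 'toupper($1) == "COPY" || toupper($1) == "ADD" { for (i = 2; i < NF; i++) print $i }' Dockerfile \
      | while read -r src; do find "$src" -type f 2>/dev/null; done \
      | sort -u \
      | xargs sha256sum > context.hashes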
But before you spend too much time trying to implement the above code, realize that it already exists, as docker build. Rather than trying to avoid running a build, I'd focus on getting the build to utilize the build cache so new builds skip all unchanged steps, possibly generating the exact same image id.

In "eshoponcontainers", most of the dockerfiles have copy(all csproj) and restore, does it not make overhaed on container?

I can see this comment in each Dockerfile:
Keep the project list and command dotnet restore identical in all Dockerfiles to maximize image cache utilization
But I am confused about two things: it would build fast (due to caching), but doesn't that take extra space in the container filesystem? And if I add a new project to the solution in the future, do I have to change every Dockerfile?
The basket-api Dockerfile here has COPY commands for the projects: https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/src/Services/Basket/Basket.API/Dockerfile
The reason for doing this is to take advantage of Docker's layer caching. Yes, there is maintenance overhead to ensure that the list here reflects the actual set of project files that are defined in the repo.
The caching optimization comes into play for iterative development scenarios (inner loop). In those scenarios, you're typically making changes to source code files, not the project files. If you haven't changed any project files since you last built the Dockerfile, you get to take advantage of Docker's layer caching and skip the restoration of NuGet packages for those projects, which can be quite time consuming.
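Here is a simplified sketch of the pattern (the project paths and base image are illustrative, not the actual eShopOnContainers file list):

    FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
    WORKDIR /src

    # 1. Copy only the project files. These layers change only when a .csproj changes,
    #    so the restore below is served from the cache during normal code edits.
    COPY Services/Basket/Basket.API/Basket.API.csproj Services/Basket/Basket.API/
    COPY BuildingBlocks/EventBus/EventBus/EventBus.csproj BuildingBlocks/EventBus/EventBus/
    RUN dotnet restore Services/Basket/Basket.API/Basket.API.csproj

    # 2. Copy the rest of the source. Code edits invalidate the cache from here on,
    #    but the expensive restore above is reused.
    COPY . .
    RUN dotnet publish Services/Basket/Basket.API/Basket.API.csproj -c Release -o /app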
Yes, a small amount of extra space is included in the container image because the project files end up getting copied twice, which results in duplicate data between the two layers. But that's fairly small, because project files aren't that big.
There are always tradeoffs. In this case, the desire is to have a fast inner loop build time at the expense of maintenance and a small amount of extra space in the container image.

Multiple Dockerfiles in project with different contexts

I have a repository where I need to create several Dockerfiles, but each of them should have a different context.
I like the solution posted here, but it doesn't fully fit with my use case.
NO, THIS IS NOT A DUPLICATE. IT'S A DIFFERENT USE CASE. PLEASE KEEP READING.
I know it's better to exclude unnecessary folders from the context, especially if they are big. Well, my project consists of several folders, some of which are really huge.
For simplicity, suppose this is the file tree of my project:
hugeFolder1/
hugeFolder2/
littleFolder1/
littleFolder2/
And suppose that I need to create two Dockerfiles (following the solution that I previously mentioned):
docker/A/Dockerfile <- let's call this Dockerfile "A"
docker/B/Dockerfile <- let's call this Dockerfile "B"
docker-compose.yml
Now the point is:
A only needs hugeFolder1 and both the little folders.
B only needs hugeFolder2 and both the little folders.
So I would like to exclude, for each Dockerfile, the huge folder it doesn't need.
What is the best way to achieve this?
Edit: the previous answer was adding folders to the image that were outside the build context, which won't work. Additionally, the OP clarified the contents and how the image will be used in the comments, showing a very good use case for multi-stage builds.
I'll take a stab at it, based on the info provided.
Firstly, you can't exclude folders from a given docker context. If you use docker build -t bluescores/myimage /some/path, your context is /some/path/**/*. There's no excluding the huge folders, or the little folders, or anything in them.
Second, in order to use ADD or COPY to bring files into your docker image, they must exist in the build context.
That said, it sounds like you'll end up using various combinations of the huge and little folders. I think you'll be better off sticking with the strategy you've outlined, with some optimizations - namely using multi-stage builds.
Skip docker-compose for now
The solution here that you reference isn't really aiming to solve the problem of closely controlling context. It's a good answer to a totally different question than what you're asking. You are just trying to build the images right now, and while docker-compose can handle that, it doesn't bring anything to the table you don't have with docker build. When you need to orchestrate these containers you're building, then docker-compose will be incredible.
If you aren't sure you need docker-compose, try doing this without it. You can always bring it back into the mix later.
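For example, plain docker build invocations along these lines (the image names are made up) are all the build step needs for now:

    # Run from the repository root so both builds share the same context.
    docker build -f docker/A/Dockerfile -t myproject-a .
    docker build -f docker/B/Dockerfile -t myproject-b .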
Now my image is gigantic
See if multi-stage builds are something you can make use of.
This would essentially let you cherry pick the build output from the image you COPY the huge folders into, and put that output in a new, smaller, cleaner image.
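A minimal sketch of that idea, using the folder names from your example (the base images, build command and output path are placeholders):

    # Stage 1: the huge folder is only needed to produce the build output.
    FROM ubuntu:24.04 AS builder
    WORKDIR /build
    COPY hugeFolder1/ hugeFolder1/
    COPY littleFolder1/ littleFolder1/
    COPY littleFolder2/ littleFolder2/
    RUN ./littleFolder1/build.sh    # placeholder for whatever produces your artifacts in /build/out

    # Stage 2: only the build output is carried into the final image.
    FROM alpine:3.19
    COPY --from=builder /build/out /app

The huge folders are still sent to the daemon as part of the build context, but they no longer end up in the image you ship.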

TFS MSBuild Copy Files from Network Location Into Build Directory

We are using TFS to build our solutions. We have some help files that we don't include in our projects as we don't want to grant our document writer access to the source. These files are placed in a folder on our network.
When the build kicks off we want the process to grab the files from the network location and place them into a help folder that is part of source.
I have found an activity in the xaml for the build process called CopyDirectory. I think this may work but I'm not sure what values to place into the Destination and Source properties. After each successful build the build is copied out to a network location. We want to copy the files from one network location into the new build directory.
I may be approaching this the wrong way, but any help would be much appreciated.
Thanks.
First, you might want to consider having your documentation author place his documents in TFS. You can give him access to a separate folder or project without granting access to your source code. The advantages of this are:
Everything is in source control. Files dropped in a network folder are easily misplaced or corrupted, and you have no history of changes to them. The ideal for any project is that everything related to the project is captured in source control so you can lift out a complete historical version whenever one is needed.
You can map the documentation to a different local folder on your build server such that simply executing the "get" of the source code automatically copies the documentation exactly where it's needed.
The disadvantage is that you may need an extra CAL for him to be able to do this.
Another (more laborious) approach is to let him save to the network location, and have a developer check the new files into TFS periodically. If the docs aren't updated often this may be an acceptable compromise.
However, if you wish to copy the docs from the network during your build, you can use one of the MSBuild Copy commands (as you are already aware), or you can use Exec. The Copy commands are more complicated to use because they are often populated with filename lists generated from the outputs of other build targets, and are usually used with solution-relative pathnames. But if you're happy with DOS commands (xcopy/robocopy), you may find it much easier just to use Exec to run an xcopy/robocopy command. You can then "develop" and test the xcopy command outside the MSBuild environment and then just paste it into the MSBuild script with confidence that it will work - much easier than trialling copy settings as part of your full build process.
Exec is documented here. The example shows pretty well how to do what you want, but in your case you can probably just replace the Command attribute with the entire xcopy/robocopy command (or even the name of a batch file) you want to use, so you won't need to set up the ItemGroup etc.
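For instance, a sketch along these lines (the share path, folder names and target name are all placeholders) could go into the project file or a shared .targets file:

    <Target Name="CopyHelpFiles" BeforeTargets="Build">
      <!-- Placeholder paths: pull the help files from the network share into
           the Help folder inside the workspace before compilation. -->
      <Exec Command="robocopy \\fileserver\TeamDocs\Help &quot;$(MSBuildProjectDirectory)\Help&quot; /E"
            IgnoreExitCode="true" />
      <!-- robocopy reports success with non-zero exit codes (1-7), so either ignore
           the exit code as above or inspect it explicitly. -->
    </Target>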

Copy files to another folder during check in (TFS Preview)

I have the following scenario: the company edits aspx/xml/xslt files and copies them manually to the servers in order to publish them. So no build is done. For the sake of control, we've decided to adopt TFS Preview, since it tracks the version, who edited it, and so on. Needless to say, it works like a charm. :)
The problem is that since we are unable to build the apps we can't set a build definition to automate the copy of files to another place which, as I've stated before, is done manually.
My question is: is it possible to copy the files to another place (a folder on a server, or local) during the check-in? If so, how? (Remember, we don't build, so we can't customize the build process...)
You have two options.
1) Create a custom check-in policy. I'm not familiar enough with this process to give you any pointers, but I believe it can be done.
2) Create a custom build template, and use that for your builds. You should be able to wipe the build template down to nothing, and then only add the copy operation to it. This is probably the route I would take. Get started here.
You mention you are using TFS Preview, which is hosted in the cloud, so it won't be able to access any machines in your network unless you're prepared to open up your firewalls :).
You can copy source-controlled files around the TFS instance (say, into a source-controlled drop folder) and then check them out after the build completes.
Start by familiarising yourself with customising the TFS Build Process.
When you're up to speed, you need to look at adding a "Copy" Activity in the Workflow to move the files to the drop folder.
