Team Foundation Server 2010 for Developers - part 7 - Configuring a Build

by Dmitry Kirsanov 10. March 2012 12:00

Microsoft Visual Studio 2010 Team Foundation Server has two most important features: source control and build automation. Although its other features are very important as well, these two alone are pretty much enough to consider the purchase of Team Foundation Server.

Today we are going to talk about the Build Management system of Visual Studio 2010 Team Foundation Server.

It builds on my machine!

The build management system in Team Foundation Server 2010 aims to solve two major problems of software development teams. The first can be described by the very popular argument: “it builds on my machine!” And only there.

Why does it happen? See, software developers are very different people. Visual Studio developers can only do something useful when they are local administrators on their machines; that’s an objective requirement for many things they have to do as developers, and it’s only the tip of the iceberg. They also differ in the range of tools they use, and they certainly don’t replicate each other when it comes to .NET updates and Windows SDK versions.

I’ve seen situations where developers supplied binaries built using the “Debug” configuration, i.e. binaries that emitted debug information and received no code optimization during compilation, so performance suffered greatly. In every case it was unintentional, just human error or lack of knowledge.
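By the way, if you suspect that a delivered binary was accidentally built in the “Debug” configuration, you can verify it programmatically: the C# compiler stamps debug builds with a DebuggableAttribute that disables JIT optimization. Here is a minimal sketch (the assembly path is a made-up example):

    using System;
    using System.Diagnostics;
    using System.Reflection;

    class BuildConfigurationCheck
    {
        static void Main()
        {
            // Load the binary we want to inspect (path is an example).
            Assembly assembly = Assembly.LoadFrom(@"C:\Drops\MyProduct.dll");

            // Debug builds carry a DebuggableAttribute with the JIT optimizer disabled.
            var debuggable = (DebuggableAttribute)Attribute.GetCustomAttribute(
                assembly, typeof(DebuggableAttribute));

            bool isDebugBuild = debuggable != null && debuggable.IsJITOptimizerDisabled;
            Console.WriteLine(isDebugBuild
                ? "Warning: this assembly was built in the Debug configuration."
                : "This assembly looks like an optimized Release build.");
        }
    }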

To solve that problem, a rule should be enforced: no binaries supplied to a third party, including your Quality Assurance specialists, may be built on a developer’s machine. Instead, they should be built in a centralized, controlled, “clean” environment.

 

The Food Chain

One of the most popular misconceptions I hear from those who are eager to implement Team Foundation Server in their environment is that there is a “build server” which does all the builds. Such a concept does exist in some other environments, but it’s not the case for Team Foundation Server 2010.

Overall, the reality here is stranger than the dream.

Here is how it looks when we’re talking about the build process.

  • You have a TFS server.
  • Your server hosts one or more Project Collections. Each Project Collection has one or more projects, and more often than not there is more than one. If you keep all your software titles in one TFS project, you’re doing it wrong. It’s up to you how you separate your projects between collections, and whether you do it at all. In most cases having more than one project collection is overkill, but again, that’s up to you.
  • Each Project Collection has one or more Build Controllers associated with it. A Build Controller is a lightweight component of Team Foundation Server which manages Build Agents.
    A Build Controller can be associated with only a single Project Collection. Each project within that collection can use that Build Controller, but you can’t use it with any other Project Collection, whether on this TFS server or another.
  • Only one Build Controller can be installed per machine, physical or virtual.
  • The Build Agent is a separate (free) piece of software which takes orders from a Build Controller and actually builds the project when needed. Any number of Build Agents can be associated with a single Build Controller, but any single Build Agent can only be associated with one Build Controller. You can have multiple Build Agents on a single machine. (The sketch after this list shows how to walk this hierarchy programmatically.)
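If you want to see this hierarchy in your own environment, the TFS 2010 client API can enumerate it. Here is a minimal sketch, assuming a collection at http://tfs:8080/tfs/DefaultCollection (the URL is an example) and references to the Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.Build.Client assemblies:

    using System;
    using Microsoft.TeamFoundation.Build.Client;
    using Microsoft.TeamFoundation.Client;

    class BuildTopology
    {
        static void Main()
        {
            // Connect to a single Project Collection (URL is an example).
            var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                new Uri("http://tfs:8080/tfs/DefaultCollection"));
            var buildServer = collection.GetService<IBuildServer>();

            // Each controller belongs to exactly one collection;
            // each agent belongs to exactly one controller.
            foreach (IBuildController controller in buildServer.QueryBuildControllers(true))
            {
                Console.WriteLine("Controller: {0}", controller.Name);
                foreach (IBuildAgent agent in controller.Agents)
                {
                    Console.WriteLine("    Agent: {0}", agent.Name);
                }
            }
        }
    }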

That’s the way it is. There is no “build server”; instead, multiple build agents act as one, and the reason there are many of them is not that you have a long queue of projects to build. In my experience, even with around 30 developers working on 10 projects, you still have no queue, assuming you have the default of 2 build agents.

The reason you will end up with tens of build agents is more prosaic: in order to run automated testing, the software must be built on the same machine that runs the tests. Be it virtual or physical, that machine will have to have a build agent installed (most likely together with a Test Agent and a Lab Agent) in order to run unit and integration tests.

There are situations when you have functionality which only works on a particular operating system. For example, some features require Windows Aero, and it doesn’t exist on Windows XP. The same goes for Microsoft Internet Explorer, since the highest version you can get on Windows XP is MSIE 8.

So, in order to test such functions, you will need to create a build definition and specify the agent based on the required operating system. Moreover, you will need to create a Test List which includes only the tests suitable for that operating system. It’s important to note that Test Lists are deprecated in Visual Studio 11. It’s too early to say what they will be replaced with, but the fact is: you can only use Test Lists in Visual Studio 11 if they were created in a previous edition of Visual Studio; you can’t create new ones.
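A more future-proof way to group operating-system-specific tests is the [TestCategory] attribute, which MSTest already supports in Visual Studio 2010 and which is not deprecated. A minimal sketch (the category name is my own example); a build definition can then filter test runs by category instead of by Test List:

    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class AeroDependentTests
    {
        // The category lets a build definition include or exclude this test
        // depending on the operating system of the build/test machine.
        [TestMethod]
        [TestCategory("RequiresAero")]
        public void OperatingSystem_IsAeroCapable()
        {
            // Windows Vista and later (NT 6.0+) can run Windows Aero.
            Assert.IsTrue(Environment.OSVersion.Version.Major >= 6);
        }
    }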

Unit and Integration Tests

You won’t see the term “integration test” in Visual Studio. All tests are just “tests”, regardless of their type. From the theory of Software Quality Assurance (QA) we know that there are at least two major types of tests: unit tests and integration tests.

A unit test tests the smallest possible piece of code, usually a single function. A unit test consists of one or more functions inside a test project, but the same is true of any other test, be it integration, UI or anything else.

Unit tests require abstraction from external resources. This means you don’t connect to an external SQL server, mail server or whatever other resources you might have. Instead, you build mocks, stubs or fakes.

Stubs are classes which replicate the required resource but contain no logic. A stub returns hardcoded results, so every test using it sees exactly the same data.

Mocks are dynamic stubs: they replicate real-life objects and may contain logic inside. For example, if you have a function which generates a random password, a stub would always return the same password, while a mock would actually generate one.
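To make the distinction concrete, here is a minimal hand-written sketch around a hypothetical IPasswordGenerator interface: the stub always returns the same hardcoded password, while the mock contains enough logic to actually generate one.

    using System;

    // Hypothetical abstraction of the resource the code under test depends on.
    public interface IPasswordGenerator
    {
        string Generate(int length);
    }

    // Stub: no logic, hardcoded result. Every test sees exactly the same value.
    public class PasswordGeneratorStub : IPasswordGenerator
    {
        public string Generate(int length)
        {
            return "P@ssw0rd";
        }
    }

    // Mock: replicates the real object and contains logic inside.
    public class PasswordGeneratorMock : IPasswordGenerator
    {
        private readonly Random _random = new Random();

        public string Generate(int length)
        {
            const string alphabet = "abcdefghijklmnopqrstuvwxyz0123456789";
            var buffer = new char[length];
            for (int i = 0; i < length; i++)
            {
                buffer[i] = alphabet[_random.Next(alphabet.Length)];
            }
            return new string(buffer);
        }
    }

Either implementation can be handed to the code under test in place of the real generator, which keeps the unit test isolated from its environment.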

Fakes are a new concept in Visual Studio 11 (the Microsoft Fakes isolation framework) and are irrelevant to Visual Studio 2010 / Team Foundation Server 2010. However, it’s important to study the subject, as soon enough you will encounter the need to use the Fakes framework.
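For a taste of what is coming, here is a minimal Visual Studio 11 sketch that shims DateTime.Now inside a test. It assumes a Fakes assembly has been generated for System.dll; none of this compiles under Visual Studio 2010:

    using System;
    using Microsoft.QualityTools.Testing.Fakes;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class FakesSample
    {
        [TestMethod]
        public void Today_IsNewYearsDay_WhenClockIsShimmed()
        {
            using (ShimsContext.Create())
            {
                // Detour every call to DateTime.Now within this context.
                System.Fakes.ShimDateTime.NowGet = () => new DateTime(2012, 1, 1);

                Assert.AreEqual(1, DateTime.Now.Day);
                Assert.AreEqual(1, DateTime.Now.Month);
            }
        }
    }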

So, Microsoft Visual Studio 2010 Team Foundation Server allows you to run any of these tests as part of the automated build. You may need a prepared test environment to run them, but it’s a fun process, especially considering that your QA specialist’s job becomes really creative and full of joy instead of boredom and routine.

A big, healthy project usually has hundreds of unit tests and at least tens of integration tests, which would consume hours of a QA specialist’s time if performed manually. Invest your time in automated testing and it will be among the most important investments you’ve made in your software development process.

TFS Build Automation Best Practices

Here is what I have found to be optimal practice for automated builds in Team Foundation Server:

  • Schedule nightly builds. These are ordinary builds that don’t need to include much testing and are only meant to verify that the code builds, which means the code is not bogus.
    Nightly builds should not store the resulting binaries anywhere. The only things you need from them are the log file and the knowledge that your code is “green”. Do not associate work items (bugs, change sets etc.) with nightly builds. I would consider the nightly build a sort of “smoke test” build.
  • The same goes for Continuous Integration builds. Use them to make sure the code stays green, and strip out the excessive work item linking and storage footprint.
  • If the nightly build fails and the code goes “red”, instruct your developers not to get the latest sources from TFS until it goes green again. The last thing you want is everyone having a copy of bad code.
  • Only release binaries that were produced by TFS. Create a build definition which produces the release code. Ensure it runs all the tests: unit tests, UI tests and whatever else you have. Ensure that the build definition contains a correct retention policy and that your QA specialist gets a task to verify the release right after it’s done; such a task can be created as a result of the build definition. Also, make sure the build is marked as failed if any unit test fails, and employ automatic creation of MSI packages using WiX, if you’re using it.
    The release build definition should only be run manually by the QA specialist, after he is totally sure that everyone has checked in their code. Usually the release build is queued after some sort of “smoke test” build, like the nightly build, so as not to mess up change set linking. (A sketch of queueing such a build programmatically follows this list.)
  • Make sure you have a well-defined build promotion procedure: a workflow which requires that your code be produced by a specified build definition as a prerequisite for further promotion. The first phase (“quality”) could be “Ready for initial testing”; then your QA specialist must ensure the code passed the required tests and promote the build to “Initial tests passed”. It’s up to you which qualities you include in that workflow, but I would also recommend including a “UAT passed” step, which covers acceptance of the product by the stakeholder.
  • Do not allow all team members to create, and especially to edit, build definitions. Otherwise, ensure that all build definitions are approved before they are created.
  • Ensure all build definitions are named according to a specific convention.
  • Make sure you have a procedure for creating the build machine. One of the worst scenarios is when someone else builds the machine and doesn’t have a clue what the Windows SDK is. The problem is that it will affect the resulting product, yet it’s the last thing anyone checks, so it results in a major waste of time.
  • The build definition description is not optional. It is a must, and it must contain the information you need, such as the purpose of the build definition. When more than one person is armed with the right to create build definitions, you soon enough find yourself with duplicate definitions, simply because people don’t bother to check whether an existing definition was created for the same purpose.
  • Don’t mess up security. Don’t give everyone full access to the shared folder, and don’t make everyone a TFS administrator, unless you know what you’re doing.
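Queueing the release build and promoting its quality, as described above, can also be scripted against the TFS 2010 client API. A minimal sketch, assuming a project named MyProject, a build definition named Release Build and a build quality value that already exists in your collection (all three names are examples):

    using System;
    using System.Threading;
    using Microsoft.TeamFoundation.Build.Client;
    using Microsoft.TeamFoundation.Client;

    class QueueReleaseBuild
    {
        static void Main()
        {
            // Collection URL, project and definition names are examples.
            var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                new Uri("http://tfs:8080/tfs/DefaultCollection"));
            var buildServer = collection.GetService<IBuildServer>();

            IBuildDefinition definition =
                buildServer.GetBuildDefinition("MyProject", "Release Build");

            // Queue the build; the Build Controller hands it to an available agent.
            IQueuedBuild queued = buildServer.QueueBuild(definition);
            while (queued.Status != QueueStatus.Completed &&
                   queued.Status != QueueStatus.Canceled)
            {
                Thread.Sleep(5000);
                queued.Refresh(QueryOptions.None);
            }

            // Promote a successful build into the promotion workflow.
            IBuildDetail build = queued.Build;
            if (build != null && build.Status == BuildStatus.Succeeded)
            {
                build.Quality = "Ready for initial testing"; // must exist in the collection
                build.Save();
            }
        }
    }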

 

Here is the 1:30h video I made about the Team Foundation Server 2010 build system:

Team Foundation Server 2010 for Developers - 7 - Configuring a build