Battling TFS Builds and Powershell

My last few days have been about getting DNX builds to work in TFS. I’ve got a couple of projects / solutions of different types ongoing, spanning web apps, web APIs and class libraries. I consider a good build system a fundamental component of the development process, one that aids continuous integration and enables continuous deployment.

The vNext builds in TFS 2015 are a whole new setup, and as with any new tech, problems arise and solutions aren’t always easy to find. The existing tasks don’t always cater to my needs, so I’ve been writing a bunch of simple Powershell scripts to fill in the gaps. And since I’m an absolute Powershell noob, well, lots of problems appeared.

So, what follows are some problem-solution scenarios I found along the way.

TFS build + Powershell : Persist vars between steps

The Context:
I’ve opted to give GitVersion a try. One of my solutions is composed of class libraries that are reused in multiple other solutions (base classes, interfaces, abstractions). My distribution mechanism of choice is an internal NuGet server, so versioning the packages is key. I prefer to follow SemVer since it makes explicit what kind of changes occur, and also guarantees an incremental version number on each change.

Previously (pre-DNX), I’d built a bunch of command line tools for my team to use to control the version numbers. This required devs to run a command pre-commit to increment the number in the AssemblyInfo.cs files (or, in our case, a single shared file). Increments were driven by the dev, who would choose to increment the major, minor or patch component. .Net assemblies include a fourth component – revision – which we would allow to increment if no changes were made but we simply wanted a new build / package (because something went wrong). NuGet updates in projects / solutions require incremental numbers, so this was useful to us.

Though not difficult, the manual increment step is still a manual step. GitVersion automates this, guaranteeing at least a patch increment on each new build. Commit messages can also drive minor and major increments (by including “feature” or “breaking” in the message). Alternatively, the GitVersion.yaml file has a next-version property that can be used for that too.

The problem:
In my build, I have a step (before dnu restore and build) to get the new version number and add it to an environment variable. The idea would be to persist the variable’s value throughout the various build steps (not across builds or permanently). Unfortunately, using an $env:varname style assignment in the Powershell script does not store the variable. Calling Get-Variable in the next steps does not show it in the list of available variables. This may be possible in the future, and should be possible in VSO (using specific variables), but not yet in TFS 2015. I opened an issue that has a short discussion attached: https://github.com/Microsoft/vso-agent-tasks/issues/375 .
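
For reference, that step looked something like the sketch below (the GitVersion.exe location and the variable name are illustrative assumptions; SemVer is one of the properties GitVersion outputs):

# Run GitVersion and capture its JSON output as a single string
$GitVersionOutput = & .\tools\GitVersion.exe /output json | Out-String
# This assignment only lives for the duration of this step; later build steps won't see it
$env:GitSemVer = ($GitVersionOutput | ConvertFrom-Json).SemVer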

The workaround:
To solve this, I took the “easy” but somewhat inelegant route of serializing the object that GitVersion creates to a file, and reading / de-serializing the file contents back into an object wherever I need it. For this to work, I had to put the GitVersion.yaml next to the .git folder (otherwise GitVersion.exe would not produce the correct output). Since the PS scripts run from the solution folder, I serialize the object to a gitVersionOutput.json file, which shouldn’t interfere with anything else (and will only exist on the build server).

$GitVersionOutput | Out-File gitVersionOutput.json
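
A later build step can then read the file back and de-serialize it (SemVer being one of the properties in GitVersion’s JSON output):

# Read the serialized GitVersion output back in a subsequent build step
$gitVersion = Get-Content -Raw gitVersionOutput.json | ConvertFrom-Json
$newVersion = $gitVersion.SemVer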

Once I’ve gotten the new version number for the project / solution, I can run through each project.json file in the solution and substitute the version property’s value.

Powershell : ConvertTo-Json might not produce the result you want

The Context:
As mentioned in the previous step, once I’ve got the new SemVer version number, I can substitute the value in each of the project.json files in the solution. This will allow the build to use that value for each of the generated packages. Package and assembly version numbers will automatically be incremented, NuGet packages will be correctly versioned and the Git repo can then be tagged. Binaries, packages and repo will be synced, matching specific point-in-time inputs to their outputs.

The problem:
What would seem to be a simple step / Powershell script turned out to be a load of work. The idea was to read each project.json with a combination of the Get-Content and ConvertFrom-Json cmdlets, change the version property’s value, and rewrite the file with the ConvertTo-Json and Out-File cmdlets. Unfortunately it’s not that straightforward. By default, ConvertTo-Json won’t serialize deep trees correctly – some subtrees, like the dependencies sections, get converted to a string instead of an object.

In other words, running

Get-Content -Raw project.json | ConvertFrom-Json | ConvertTo-Json | Out-File project.json

doesn’t produce the correct output.

What was read as (shortened for brevity to the relevant parts)


{
  "frameworks": {
    "dotnet": {
      "dependencies": {
        "System.ComponentModel": "4.0.0-beta-23109",
        "System.Data.Common": "4.0.0-beta-23109",
        "System.Runtime": "4.0.20-beta-23109",
        "System.Runtime.InteropServices": "4.0.20-beta-23109",
        "System.Runtime.Serialization.Primitives": "4.0.10-beta-23109"
      }
    }
  }
}

Gets written as


{
  "frameworks": {
    "dotnet": {
      "dependencies": "@{System.ComponentModel=4.0.0-beta-23109; System.Data.Common=4.0.0-beta-23109; System.Runtime=4.0.20-beta-23109; System.Runtime.InteropServices=4.0.20-beta-23109; System.Runtime.Serialization.Primitives=4.0.10-beta-23109}"
    }
  }
}

Notice that dependencies is no longer an object with properties but a single string.

The solution:
Two things should be considered. To read the JSON, Get-Content should have the -Raw switch so no other piped step (such as Out-String) is needed; Get-Content -Raw project.json | ConvertFrom-Json works correctly.
The other thing, when serializing, is to include the -Depth argument before writing the file: ConvertTo-Json -Depth 999 | Out-File project.json, so the deeper parts of the tree are not written as a single string value.
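
Putting it together, the substitution step can look something like this (a sketch, assuming $newVersion holds the SemVer string from the GitVersion step):

# Update the version property in every project.json and write it back with full depth
Get-ChildItem -Recurse -Filter project.json | ForEach-Object {
    # Assumes each project.json already declares a version property
    $projectJson = Get-Content -Raw $_.FullName | ConvertFrom-Json
    $projectJson.version = $newVersion
    $projectJson | ConvertTo-Json -Depth 999 | Out-File $_.FullName
}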

Deployment – IIS might be looking for x64 apps because of the app pool

The context:
This one drove a colleague of mine, who had never used IIS, mad. The context was a simple one: deploying (xcopy) an app to an IIS-based server. Our build doesn’t have this step set up yet (but hopefully it will, soon, with OctopusDeploy), and it was also something kinda one-shot-ish since we were trying to deploy a branch’s snapshot with some mocks for validation. Essentially we had an xcopy to the server, and a reference to an x86 version of a dnx beta.

The problem:
After small changes to IIS (basic stuff), one last error was keeping the site / app from displaying. IIS was looking for dnx-clr-win-x64.1.0.0-beta5, and not the x86 variant. We did not have a reference to x64 anywhere.

The solution:
This was solved with an application pool setting. The app pool used by the site has a setting that allows 32-bit apps to run, and changing that flag to True corrected the problem. Unfortunately, this one was pretty hard to find. To get there:

  1. In IIS Manager, select the application pool being used by the site / app
  2. Open its Advanced Settings
  3. In the General properties group, change “Enable 32-Bit Applications” to True
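
The same flag can also be set from an elevated command line with appcmd (the app pool name here is just an example):

# Enable 32-bit applications for the app pool hosting the site
& "$env:windir\System32\inetsrv\appcmd.exe" set apppool "MyAppPool" /enable32BitAppOnWin64:true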

Powershell – Hidden vars don’t go out to the command line

The context:
As previously mentioned, GitVersion is being used to calculate a new SemVer version for the artifacts being generated. The last step, after building, testing and pushing packages, is to tag the repo, on the commit that generated the version, with the version number. This is generally done with a git tag followed by a git push.

The problem:
The origin argument in the command is a URL to the repo, and authentication info is necessary. In the case of TFS repos, a username / password combination is required. At the command line, git push generally prompts for the user and password values in separate interactions in the shell. To avoid these interactions (in a build they’re not possible, or at least not convenient), a single command should be used with all the required info. In this case, the URL for origin can carry the user and password before the host, using the following format:

git push https://<user>:<password>@<repo-url>

Passwords are sensitive things and we don’t want them hard-coded in the script. We can use build variables (defined in the variables tab of the build job). TFS supports “hidden” variables that allow you to include sensitive info and hide it from the UI and logs. These vars are generally available as environment variables throughout the build. While you can access normal variables through ${env:variableName} references, hidden vars won’t get substituted correctly in the script, so authentication fails.

The Solution:
Build variables can generally be referenced through the ${env:varname} syntax, but hidden vars can’t. They can, though, be passed as arguments to a script. In that case their value is used internally as expected and still hidden (substituted by ****) in the logs. Script arguments are declared at the start of the script and defined in the arguments textbox in the step definition; there, the reference is in the form $(varName).

So for my case, I have 3 vars I define in the build – tagUser, tagPass, tagRepoUrl – where tagPass is hidden. At the very beginning of my powershell script I accept 3 arguments (by ordinal reference) which can be locally accessed as $tagUser, $tagPass and $tagRepoUrl:


[string]$tagUser = $args[0]
[string]$tagPass = $args[1]
[string]$tagRepoUrl = $args[2]

 

In the arguments textbox I reference the build vars in order: $(tagUser) $(tagPass) $(tagRepoUrl)

(screenshot: the arguments textbox in the Powershell step definition)
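
Inside the script, the tag and push can then use those arguments along the lines of the sketch below ($newVersion is assumed to come from the GitVersion step, and tagRepoUrl is assumed to hold the repo address without the scheme):

# Tag the commit that produced this version and push the tag (URL composition is an assumption)
& git tag $newVersion
& git push "https://$($tagUser):$($tagPass)@$tagRepoUrl" "refs/tags/$newVersion"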

 

There’s also a GitHub issue for this one: https://github.com/Microsoft/vso-agent-tasks/issues/388

Extra:
I can probably get the repo URL through some other mechanism (maybe GitVersion or the .git folder files or something), which I may explore so I can get rid of the tagRepoUrl variable altogether. It seems redundant; I just haven’t invested the time in that yet.

Build vNext projects in TFS (on-premise)

Since I’ve been working with vNext these last few weeks at Celfinet, and having started a brand new solution for some services, one of my main concerns was to get a build set up ASAP. The fewer the dependencies, projects and whatnot, the easier and faster it is to set up the build. Faster here mainly refers to code sync (from source control, in this case Git) and the build feedback. The less work that needs to be done, the faster you see the whole job pass or fail, which in the setup phase can be daunting if you have to wait long.

We generally use one of two build systems – TFS or Jenkins. The new build system (also known as Build.VNext or Build.Preview in the menu) was recently made available to the team in our installation. I’m not a fan of the previous build system. The Xaml workflow concept for defining jobs isn’t a bad idea, but it just isn’t a great experience and is really hard to get into and use. I do like Jenkins’ flexibility – your build is basically just a sequence of steps of anything – but the UI and navigation could really use some work. When I saw the presentations for the new TFS system, I was hooked (even without trying!). The interface is super clean and has all the flexibility that we got from Jenkins. There is still plenty that can be done to make the new TFS build system spectacular, but it clearly is going in a great direction.

Sidenote: I’m quite curious as to why the ASP.Net team hasn’t adopted the TFS / VSO build system. Preference seems to have gone to Travis / AppVeyor based builds (as presented on GitHub project pages).

Anyway, not everything was a smooth setup in the build system for my projects. Not that it isn’t easy, but it’s still a preview and documentation is scarce, so I naturally hit some obstacles.

1 – DNVM, DNU updates (prerequisite)

The first thing to do in the build, since we’ll need to build using dnu, is to install / update DNVM and install / upgrade DNU to whatever our solution requires. The process is an automated version of the one in my previous post. The Powershell script used is similar to the one at https://msdn.microsoft.com/Library/vs/alm/Build/azure/deploy-aspnet5#Createthedefinition. It:

  • updates / installs dnvm
  • updates dnu to whatever is in the solution’s global.json, and sets the appropriate environment var to use it.
  • restores packages for the solution’s projects.

The only difference is that the referenced script only restored dependencies for projects within the \src folder. My modified version restores packages for all the projects (or, better yet, all the project.json files) in the solution.

Therefore, a Powershell script along these lines was added to the solution folder (and source controlled) and used as the first step in the build process.
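
A minimal sketch of such a script (the bootstrap command follows the MSDN sample linked above; the assumption is that the script sits next to global.json and that dnvm ends up on the path after the bootstrap):

# Install or update dnvm itself
&{$Branch='dev';iex ((New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.ps1'))}

# Install the runtime version requested by the solution's global.json and make it active
$globalJson = Get-Content -Raw "$PSScriptRoot\global.json" | ConvertFrom-Json
& dnvm install $globalJson.sdk.version
& dnvm use $globalJson.sdk.version

# Restore packages for every project.json in the solution, not just the ones under \src
Get-ChildItem -Recurse -Filter project.json | ForEach-Object {
    & dnu restore $_.DirectoryName
}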

 

2 – MSBuild it

The second step was a regular MSBuild step, using the MSBuild task and pointing to the solution file. It’s more than enough and will correctly capture all the projects and the dependency graph within the solution.

3 – Run xUnit tests

This was a painful one for me. I spent a lot of time trying to use the available VSTest task, with no luck. It would run, sometimes with success (depending on whatever settings I used – I tried so many it’s now hard to tell). Donovan Brown had already written up a post on this, which I tried, but it just wouldn’t work for me – the path to the adapters that he mentions wouldn’t work in my installation. So after a lot of struggle I ended up writing a Powershell script to recursively find all the test projects under the solution’s \test folder and run dnx . test on each of them. This of course assumes that all the test projects are under \test and that all of them have a test command in the project.json. It’s a bit forced, but for a first effort it should be OK. Refactoring later on, when more info is available, will be required! So after the MSBuild step I added another Powershell step to run the xUnit tests:
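
A sketch of what that script can look like (it assumes the layout described above: every test project lives under \test and defines a test command in its project.json):

# Find every test project under \test and run its test command with dnx
Get-ChildItem -Path .\test -Recurse -Filter project.json | ForEach-Object {
    Push-Location $_.DirectoryName
    & dnx . test
    $testsExitCode = $LASTEXITCODE
    Pop-Location
    if ($testsExitCode -ne 0) { throw "Tests failed in $($_.DirectoryName)" }
}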

Like the previous one, the script was placed in source control in the solution directory.

Next steps

Build and test were the most important part of what I wanted in this first approach to the build. The next step will be generating the appropriate artifacts (NuGet packages) and deploying to an environment with OctopusDeploy. Hopefully it’ll be pretty straightforward 😀

Upgrading vNext MVC6 Apps From Beta4 to Beta5

It’s been a while, but my last few sprints have been focused on working with ASP.NET 5 MVC6, namely evaluating what is going on in order to prototype an app front-end. Working with beta versions is definitely challenging – there are plenty of moving parts and a lack of documentation. Fortunately, “community” is a term that increasingly applies to what Microsoft is doing with this initiative. I’ve been pretty much googling stuff, deducing concepts, and hogging the Jabber #AspNetvNext chat room, where, fortunately, I’m getting the answers I need.

I started developing the prototype in Beta4, and decided to migrate to Beta5 before starting to add even more parts to the project. Unfortunately, upgrading isn’t the easiest thing in the world, but it is possible.

1. Upgrade your DNVM and DNU

Your runtime environment should be one that can support Beta5 correctly. At this time, the latest is Beta6-10383. To upgrade, you can run the following in a command line window:

@powershell -NoProfile -ExecutionPolicy unrestricted -Command "&{$Branch='dev';iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.ps1'))}"

This will update dnvm itself, if you are still in Beta4. As of the beta5 or 6 version, a self-update command is available.

With dnvm updated, you should upgrade the runtime (dnx / dnu) versions.

dnvm upgrade

should pull and install the most recent stable runtime (and the dnu that ships with it). You can also indicate a specific version:

dnvm install 1.0.0-beta5-11904

or

dnvm install 1.0.0-beta5-11904 -r coreclr

if you want to choose / specify the CLR type. You can also pull the most recent beta build available using

dnvm upgrade -Unstable

2. Update your global.json

In your solution’s global.json, update the sdk version value:

"sdk": {
"version": "1.0.0-beta5-11904"
}

3. Update packages in the projects

At this point, you can update package refs in your projects to Beta5 versions. One thing to consider, though, is that a lot changed from Beta4 to Beta5, especially in the namespaces of some classes. For instance, pretty much every package set now has an *.Abstractions package with base classes and interfaces, separated from their implementations. I would recommend excluding as many projects as makes sense and updating gradually, traversing the dependency chain. This simply reduces the amount of focus required to analyse output errors.

The main NuGet feed contains a large set of packages, but if you want to try using the latest and greatest, you can pull from the MyGet feed. In order to do so, at the same level as your global.json and .sln file, add a .nuget/ folder and a NuGet.config file (similar to previous solution types). Add the following:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Add this repository to the list of available repositories -->
    <add key="AspNet MyGet" value="https://www.myget.org/F/aspnetvnext/" />
    <add key="nuget.org" value="https://www.nuget.org/api/v2/" />
  </packageSources>
</configuration>

I prefer having this in the solution, and committed to source control, as the entire team can share this definition in the solution context AND you won’t have to manually configure VS to use the repository (which would make it available to every solution you use, which you may or may not want).

3.1 – Abstractions

A lot of things are changing, so some care is required, and code will require updates. The most common changes that affected me were namespace changes (as mentioned in https://github.com/aspnet/Announcements/issues/14). Pretty much all of the repos now have some Abstractions namespace associated with them. That’s actually a pretty good thing, in my opinion. If you are not tied to a specific implementation of an abstract class or interface, the only reference you are required to have in class libs is the abstractions package. That makes the use of DI more straightforward in the application project, which is where the actual implementation is referenced. Everywhere else, you can program against the abstraction and mock as you wish in tests.

Some common abstraction refs I required were (abstractions package, implementing package, and the types I needed from them):

 

  • Microsoft.Framework.Caching.Abstractions (implementation: Microsoft.Framework.Caching) – IExpirationTrigger
  • Microsoft.Framework.FileProviders.Abstractions (implementation: Microsoft.Framework.FileProviders) – IFileInfo, IDirectoryContents
  • Microsoft.Framework.DependencyInjection.Abstractions (implementation: Microsoft.Framework.DependencyInjection) – IServiceCollection
  • Microsoft.Framework.Runtime.Abstractions (implementation: Microsoft.Framework.Runtime) – ILibraryManager
  • Microsoft.AspNet.Http.Abstractions (implementation: Microsoft.AspNet.Http) – IApplicationBuilder

3.2 – Configuration

Configuration classes have changed also (https://github.com/aspnet/Announcements/issues/25 , https://github.com/aspnet/Announcements/issues/26). There is a preference towards using the ConfigurationModel classes / packages, which tie into the OptionsModel, instead of the previous Configuration (though ConfigurationBuilder lives in Configuration). Configuration can be loaded into IOptions objects in DI using the following code:

var applicationEnvironment = services.BuildServiceProvider().GetRequiredService<IApplicationEnvironment>();
var configurationPath = Path.Combine(applicationEnvironment.ApplicationBasePath, "config.json");

var configBuilder = new ConfigurationBuilder()
    .AddJsonFile(configurationPath)
    .AddEnvironmentVariables();
var configuration = configBuilder.Build();

services.Configure<AppSettings>(configuration.GetConfigurationSection("AppSettings"));

The path to the config file should now be an absolute path as mentioned in https://github.com/aspnet/Announcements/issues/13

3.3 – _GlobalImport.cshtml

If you use _GlobalImport.cshtml for your views, the file name convention has changed to search for _ViewImports.cshtml (plural) as mentioned in https://github.com/aspnet/Announcements/issues/27 . That file should therefore be renamed.
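
A quick Powershell one-liner can handle the rename across the solution (just a convenience; it assumes you run it from the solution root):

# Rename every _GlobalImport.cshtml to the new _ViewImports.cshtml convention
Get-ChildItem -Recurse -Filter _GlobalImport.cshtml | Rename-Item -NewName _ViewImports.cshtml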

3.4 – Configure()

For some unexplained reason, I was having trouble using Use* extension methods on IApplicationBuilder, such as UseStaticFiles and UseCookieAuthentication: the runtime couldn’t load the UseMiddleware<>() method those extensions use.

System.MissingMethodException
Method not found: 'Microsoft.AspNet.Http.Authentication.SignInContext Microsoft.AspNet.Authentication.AuthenticationHandler.get_SignInContext()'.


 

Some packages were simply not playing nice. The workarounds I used:

For UseStaticFiles(), I just used what the extension actually does:

//app.UseStaticFiles();
app.UseMiddleware<StaticFileMiddleware>(new StaticFileOptions());

For UseCookieAuthentication(), using Beta6 packages for Microsoft.AspNet.Authentication and Microsoft.AspNet.Authentication.Cookies solved the problem.

SemVer, Team Development and Visual Studio Solutions

Controlling dependency chains between libraries and services in complex software projects isn’t easy, though you can, up to a certain point, get away with very simple solutions. I guess most teams end up applying some sort of versioning strategy to the software they develop, even if it’s just calling it 1.0 at deployment time and sticking with that. Like I said, you can get away with a lot, depending on your context. A lot of the projects I have worked on in my career were single-client, single-installation applications. That pretty much meant the version of the software installed was whatever was built from some source control revision. Match the deployed package to the revision it was created from and you had your version. This generally meant a single version across the components that made up the software.

As simple as this method is, it works and takes a load off in terms of the number of things a small team has to focus on to release software. Still, not all projects can live with such a simple solution. I’d say you can live with it as long as you can contain the whole installation as a single package or build result. Once you start mixing and matching and thinking about pluggable architectures, or have large package dependency graphs with more than a couple of nodes, things can get complicated… and break easily due to incompatibility.

Continue reading

Wiring up an event-message handling service

Recently, in a project I’m working on, I needed to create a service that would allow me to monitor what was going on in the application. In this case, log file info wasn’t rich enough for the type of analysis required. We wanted a contextualized view of what was going on at any moment within this application, but also to measure key aspects of its use. In a sense, it would be an implementation of some of the control patterns referred to in Architecting Enterprise Solutions: Patterns for High-capability Internet-based Systems (Wiley Software Patterns Series), such as the System Monitor.

Lately I’ve been looking at and am quite interested in CQRS. I like the concept and what it offers, especially when used with event sourcing. Unfortunately, I haven’t been able to apply it for many reasons, namely lack of experience with it and of time to focus on learning how to do it, especially considering the need for features in a time-constrained production application. For this particular case, though, I figured I could use some of the concepts to get to a solution that’ll do what I want.

Continue reading

“Unsalted” passwords

One of the central points in a site’s security is the management of users’ passwords. Whenever we, as developers, implement an authentication system that relies on a username and password, we have to take special care. Vulnerabilities we are unaware of in our systems can compromise the database and make its contents accessible.

If we consider users’ habits, namely the frequent reuse of passwords, the discovery of a password in one place can make it easier to access that user’s data in other places and systems. Malicious agents use attacks to obtain this kind of information precisely because of its value. Just recently there was the attack on LinkedIn, and the publication of its users’ password information. It is therefore necessary to take measures, not only to preserve the security of the data, but also to ensure that access to it does not create new vulnerabilities.

Continue reading