Visual Studio Solution and Package Versioning – Part 1

In my previous post, “SemVer, Team Development and Visual Studio Solutions”, I shared some of my thoughts about solution and application versioning. It’s been nearly two months since I wrote it, and after following the strategy I outlined there, I can now discuss my experience with it. My team and I have felt some of the pains associated with it. Just as that post grew out of a problem I was trying to solve at the time, I continue to search for fixes to other problems that have arisen since.

I understand that that post is kind of TL;DR, but it pretty much states what my problem was (and still is) and a possible path forward. I still think what was presented there is valid and works. The problems I have encountered are essentially about tooling and concurrency. NuGet can be a REAL pain in many ways if you don’t understand how it works.

What I want to do now is go over the pain points I have felt and try to find solutions for them. So many ideas are going through my mind that I hope writing them down will help me get my head around what I really need, and perhaps help others do the same.

The “solution” I used…

The changes I applied, based on the previously mentioned post, were used across 3 VS solutions with 10, 60 and 90 projects each (to give an idea of the dimension of the problem). The most generic solution contained shared libraries and had the smallest number of projects; the others correspond to different application components.

Constraints were applied to simplify some aspects of the process:

  • Any project with shareable library outputs was packaged into a NuGet package. Package generation was aided by OctoPack, installed through NuGet.
  • Two NuGet repositories were used: a local repository on the developer’s machine, so a dev could at any moment create a package and consume it without affecting other devs’ efforts during development; and a second feed for packages built and published by CI. The latter are generally considered official.
  • NuGet configuration was contained in the solution, via a nuget.config file in the .nuget solution folder. The LocalRepo folder was a sibling of the solution folder, referenced by relative path. This way, pulling from source control was enough to get dev machines connected to both repositories.
  • SemVer was used as far as .NET assemblies permit, using only the first 3 parts (major, minor and patch). The revision number was generally 0, except when we needed to regenerate a package. I seriously wanted to use SemVer because it makes so much sense, but there are problems with that, which I hope to highlight shortly.
  • To simplify package management and dependency control within a solution, ALL projects shared the same version number, stored in each project’s AssemblyInfo.cs file and updated using Zero29.
  • Because we used OctoPack, package version numbers were identical to the assembly versions and consistent throughout the solution.
  • EVERY packaged project has a nuspec file, to enforce file inclusion, dependency requirements, and metadata (except for the version number). Dependencies can be VERY problematic, and care must be taken to keep them up to date consistently.
  • A “tools” solution folder was added to each solution, with a “BuildAndPack” folder containing a series of .bat files to simplify steps such as version increments, nuspec file management, and inter-solution package updates (which I’ll get to shortly).
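For reference, here is a minimal sketch of the kind of nuspec each packaged project carried (the identifier, author and dependency names here are illustrative, not from the actual solutions). The version element uses NuGet’s replacement token, so the assembly version flows into the package at pack time:

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyCompany.Shared.Logging</id>
    <!-- $version$ is replaced with the assembly version at pack time -->
    <version>$version$</version>
    <authors>My Team</authors>
    <description>Shared logging abstractions.</description>
    <dependencies>
      <!-- intra-solution dependency: its minimum version must be kept
           in lockstep with the shared solution version -->
      <dependency id="MyCompany.Shared.Core" version="1.2.0" />
    </dependencies>
  </metadata>
</package>
```

The dependencies element is exactly where that care has to go: nothing updates those minimum versions for you.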

Devs became responsible for driving the versioning process. Any time a change was made in a solution, the dev would increment the version number, then build and pack locally. With these local packages, they could update the downstream projects with the changed dependency packages and validate the integration. This allowed them to test their changes and their effects in the downstream projects (which in many ways drive changes in upstream projects).
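That inner loop can be sketched roughly as follows, with Python standing in for the .bat files we actually used (the folder names and the bump logic here are hypothetical, just to show the shape of it): bump the shared patch version, pack, and drop the package into the sibling LocalRepo folder the solution’s nuget.config points at.

```python
import shutil
from pathlib import Path

def bump_patch(version: str) -> str:
    """Increment the patch component of a 'major.minor.patch' version."""
    major, minor, patch = version.split(".")
    return "{}.{}.{}".format(major, minor, int(patch) + 1)

def publish_locally(package_id: str, version: str,
                    build_dir: Path, local_repo: Path) -> Path:
    """Copy a freshly packed .nupkg into the dev machine's local feed
    (a sibling directory of the solution folder, as described above)."""
    nupkg = build_dir / "{}.{}.nupkg".format(package_id, version)
    local_repo.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy(str(nupkg), str(local_repo)))
```

From there, a downstream solution just runs a normal NuGet update against the local feed; nothing leaves the developer’s machine until CI publishes the official package.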

Still, we tried our best to simplify and automate each step, to minimize the number of decisions and manual steps required. There was also a strong attempt to keep each solution self-contained, so that any team member could get all the necessary tooling components and scripts with a single pull from source control.


The solution in place isn’t yet tuned to what we as a team require. It’s a start, but it still has pain points that need fixes and workarounds. I relate them mainly to tooling and change concurrency.

I mention tooling, because NuGet has a set of default behaviors that are somewhat problematic. For instance, consider:

  • a project A that consumes packages B and C.
  • B defines C as a dependency, and its nuspec says it works with any version of C from 1.0 up to (but not including) 2.0.
  • project A currently consumes v1.0 of B and C.
  • New versions of B and C exist at v1.1.

If you update package B in project A, you would probably expect C to get updated too. Unfortunately, it isn’t: A already has a version of C that satisfies the constraint, and NuGet sticks with the lowest applicable version of C. You end up with B at 1.1 and C at 1.0. Two ways to get C up to date are to update C explicitly, or to change B’s nuspec to demand C at 1.1 as a minimum. NuGet doesn’t automate that nuspec maintenance out of the box, so I had to develop a tool that iterates over the nuspecs in a solution and updates minimum package versions, especially for dependencies within the same solution, because all packages generated from a solution share the same version.
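To make that concrete, here is a small simulation of the resolution policy as I understand it (this is not NuGet’s actual code, just the “lowest applicable version” rule it applied by default at the time): a dependency that already satisfies the declared range is left alone, and a missing one gets the lowest version that satisfies it.

```python
from typing import List, Optional

def _key(version: str):
    """Naive numeric ordering for 'major.minor' style versions."""
    return tuple(int(part) for part in version.split("."))

def resolve_dependency(installed: Optional[str],
                       available: List[str],
                       minimum: str) -> str:
    """Pick a version for a dependency the way NuGet did by default:
    keep the installed version if it satisfies the constraint,
    otherwise take the LOWEST available version that does."""
    if installed is not None and _key(installed) >= _key(minimum):
        return installed  # already satisfied, so no upgrade happens
    candidates = [v for v in available if _key(v) >= _key(minimum)]
    return min(candidates, key=_key)

# Project A updates B to 1.1, but B's nuspec still only demands C >= 1.0:
print(resolve_dependency("1.0", ["1.0", "1.1"], "1.0"))  # C stays at 1.0
# Bumping B's nuspec to demand C >= 1.1 is what forces C to move:
print(resolve_dependency("1.0", ["1.0", "1.1"], "1.1"))  # C updates to 1.1
```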

Concurrency-wise, when two devs update the version of the assemblies within a solution, every AssemblyInfo.cs file is changed on the same lines, and the concurrent changes generate merge conflicts in source control. If changes are infrequent, resolution is more or less quick and easy; if they are frequent, devs get stuck in merge resolution. The same thing happens to .csproj files, since the path to a package contains its version number.
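The two conflict surfaces look roughly like this (file contents are illustrative). Every project carries the same attribute lines, rewritten on every bump:

```csharp
// AssemblyInfo.cs: identical lines in every project,
// so two concurrent version bumps collide on exactly these lines
[assembly: AssemblyVersion("1.2.0.0")]
[assembly: AssemblyFileVersion("1.2.0.0")]
```

And every project reference embeds the package version in its path:

```xml
<!-- .csproj: the HintPath changes on every package update -->
<Reference Include="MyCompany.Shared.Core">
  <HintPath>..\packages\MyCompany.Shared.Core.1.2.0\lib\net40\MyCompany.Shared.Core.dll</HintPath>
</Reference>
```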

Series of Problems and Solutions

I want to tackle these problems in a series of posts, finding workarounds or solutions for them. As they get written I’ll add the links to the list:

  • Version increments and minimizing merge conflicts
  • Keeping intra-solution package dependencies aligned
  • Keeping inter-solution package dependencies aligned
  • Keeping things aligned in a NuGet package update

I’ll also try to keep the tool set used in a GitHub repo @ and eventually as NuGet packages (once I figure out how the public feed works).


SemVer, Team Development and Visual Studio Solutions

Controlling dependency chains between libraries and services in complex software projects isn’t easy, though you can, to a certain point, get away with very simple solutions. I guess most teams end up applying some sort of versioning strategy to the software they develop, even if it’s just calling it 1.0 at deployment time and sticking with that. Like I said, you can get away with a lot, depending on your context. A lot of the projects I have worked on in my career were single-client, single-installation applications. That pretty much meant the version of the software installed was whatever was built from some source control revision. Match the deployed package to the revision it was created from and you had your version. This generally meant a single version across all the components that made up the software.

As simple as this method is, it works, and it takes a load off in terms of the number of things a small team has to focus on to release software. Still, not all projects can live with such a simple solution. I’d say you can live with it as long as you can contain the whole installation in a single package or build result. Once you start mixing and matching, thinking about pluggable architectures, or dealing with package dependency graphs of more than a couple of nodes, things can get complicated… and break easily due to incompatibilities.

Continue reading

Wiring up an event-message handling service

Recently, in a project I’m working on, I needed to create a service that would allow me to monitor what was going on in the application. In this case, log file info wasn’t rich enough for the type of analysis required. We wanted a contextualized view of what was going on at any moment within this application, but also to measure key aspects of its use. In a sense, it would be an implementation of some of the control patterns referred to in Architecting Enterprise Solutions: Patterns for High-capability Internet-based Systems (Wiley Software Patterns Series), such as the System Monitor.

Lately I’ve been looking at, and am quite interested in, CQRS. I like the concept and what it offers, especially when used with event sourcing. Unfortunately, I haven’t been able to apply it, mainly for lack of experience with it and of time to focus on learning how to do it, especially considering the need for features in a time-constrained production application. For this particular case, though, I figured I could use some of its concepts to get to a solution that does what I want.

Continue reading

Unsalted (“insonsas”) passwords

One of the central points of a site’s security is the management of user passwords. Whenever we as developers implement an authentication system based on a username and password, we have to take special care. Vulnerabilities we are unaware of in our systems can compromise the database and make its contents accessible.

If we consider users’ habits, namely the frequent reuse of passwords, the discovery of a password in one place can make it easier to access that user’s data in other places and systems. Malicious agents run attacks to obtain this kind of information because of its value. Just recently there was the attack on LinkedIn, and the publication of its users’ password information. It is therefore necessary to take measures, not only to preserve the security of the data, but to ensure that access to it doesn’t create new vulnerabilities.
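The title’s pun is about salting. As a minimal sketch of the idea (Python purely for illustration; the iteration count and salt size are arbitrary choices here, not a vetted recommendation): deriving each stored hash with a per-user random salt means identical passwords produce different hashes, defeating precomputed lookup tables.

```python
import hashlib
import hmac
import os
from typing import Tuple

def hash_password(password: str, salt: bytes = None) -> Tuple[bytes, bytes]:
    """Derive a salted hash with PBKDF2-HMAC-SHA256.
    A fresh random salt per user means equal passwords
    do not produce equal stored hashes."""
    if salt is None:
        salt = os.urandom(16)  # 16 random bytes of salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute with the stored salt and compare in constant time."""
    return hmac.compare_digest(hash_password(password, salt)[1], digest)
```

Only the salt and the digest are stored; the password itself never is.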

Continue reading

Beginner’s Guide to PostgreSQL

I just published a new beginner’s course on relational databases with PostgreSQL at Udemy, an online learning platform. The course is divided into 3 modules: an introduction to relational databases, creation and manipulation of data structures, and SQL. It currently has 18 video lessons available, with many more in production.

The course is in English, and I hope it proves very useful to anyone wanting to enter the world of this excellent database system.

Conferência SerFreelancer

On April 2nd I will have the pleasure of speaking at the “Conferência – Ser Freelancer em Portugal”, at the IPJ in Aveiro. It will certainly be a very interesting day, bringing together freelancers from several areas, especially those just starting out, to talk about the various aspects of working as a freelancer.

I will be at the conference representing web programming and software development. Funnily enough, it comes at a time when I am preparing to make the jump from freelancing to a company structure. In any case, I will still share the various aspects of my experience as a freelancer. There is always a lot to say, and many questions to answer. Until then, I have to recover my voice!

I hope to see lots of people there. If there are topics anyone would especially like me to cover, leave a comment!

More info at

FATAL: could not reattach to shared memory (key=…, addr=…): 487

FATAL: could not reattach to shared memory (key=276, addr=01F20000): 487

This is an error that seems to appear in some PostgreSQL installations on Windows, especially after an OS update. One of my installations had this problem, with the error being logged periodically in the PostgreSQL logs. Unfortunately, the error is problematic for applications that depend on the database, since it interrupts connections between the application and the database. It causes a typical .NET error with a not very enlightening message:

[SocketException (0x2746): An existing connection was forcibly closed by the remote host]
System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) +232

It could have come from an external connection, but the call stack clearly included the Npgsql library, the PostgreSQL provider for .NET. It wasn’t a connection between client and server machines, but between the application and the database service (which goes over TCP, even on the same machine).

The solution is to update the database service installation. In this case I went from 8.4.0 to 8.4.6 (the fix appears to have been introduced in 8.4.1), a version that carries the patch for this error. The update is simple: stop the service and run the installer, keeping data and configuration intact (it may be necessary to set the service to start under the LOCAL SYSTEM account again, if needed).

Info about the error and the patch:

Boot recovery in Windows Server 2008

Right. This post sums up my New Year’s Day. The New Year’s Eve party was excellent. The afternoon after it was quite frightening. I took advantage of the “downtime” to clean up my server and add some new disks. And thus chaos was unleashed! Let me explain…

My server is relatively simple and built to my needs. It’s a simple setup with two 1TB disks in RAID 1 for the system and repositories of important files, plus three 640GB disks for “junk”. I decided to do an upgrade and add some space: I would pair a 640GB disk that had been sitting idle on the shelf with one of the existing ones in a RAID 1 dedicated to backups, and add another 1TB disk for more junk. That would leave a total of 7 disks, around 3TB of space, with half of it in RAID 1. A separate unit dedicated to backups seems like a good bet, and I want to implement a more automated way of making backups of the server, of my own machine, and of my collaborators’ machines.

So I opened the server, cleaned out some (fortunately little) dust, and reorganized the drive bay, which is now completely full. I put in the old disk that had been sitting idle (and which still had a previous OS installation of the server on it, as a backup), booted the machine, created the new RAID and… the server booted into the old OS instead of the current installation. It booted from the second disk instead of the first. And so the chaos began.

Continue reading