A careful gotcha when referencing VB.NET code from C#

When referencing VB.NET code from C# there is a subtle but important difference between the two languages.

I found this while trying to access some legacy VB.NET code that used reflection to interact with two classes with nearly the same name. The VB.NET code worked perfectly fine, but I was faced with a compiler error when trying to call it from a new C# project.

The error to watch out for is

CS0234 C# The type or namespace name does not exist in the namespace (are you missing an assembly reference?)

This comes down to the case sensitivity differences between VB.NET and C#. In VB.NET the casing of class names is not significant, so a class named VbCasingExample is the same as the class vbcasingexample. In C#, however, these are two very different classes because identifier casing matters.

VbCasing.PNG

Compared to

CSharpCasing.PNG
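As a rough sketch of what the screenshots show (the class name here is invented for illustration), C# will only resolve the exact casing:

```csharp
using System;

// Hypothetical stand-in for a class defined in a referenced VB.NET assembly.
// VB.NET resolves the name under any casing; C# does not.
public class VbCasingExample { }

public static class Program
{
    public static void Main()
    {
        var ok = new VbCasingExample();      // compiles: casing matches exactly
        // var bad = new vbcasingexample();  // C# compiler error: type or namespace not found
        Console.WriteLine(ok.GetType().Name);
    }
}
```

In VB.NET, `Dim x As New vbcasingexample()` would happily compile against the same class.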

So be careful when referencing VB.NET code from any case-sensitive .NET language.

Set ConformanceLevel to Auto but it is already set to Auto!?

Quick post. I'm currently uplifting an old .NET 1.1 app to 4.5, and when trying to run it I was getting the following exception.

System.InvalidOperationException was unhandled by user code
  HResult=-2146233079
  Message=Token Text in state Start would result in an invalid XML document. Make sure that the ConformanceLevel setting is set to ConformanceLevel.Fragment or ConformanceLevel.Auto if you want to write an XML fragment. 
  Source=System.Xml
  StackTrace:
       at System.Xml.XmlWellFormedWriter.AdvanceState(Token token)
       at System.Xml.XmlWellFormedWriter.WriteString(String text)
       at System.Xml.Xsl.Runtime.XmlQueryOutput.WriteString(String text, Boolean disableOutputEscaping)

Upon checking my code I verified that my XslCompiledTransform had its ConformanceLevel set to Auto. However, I was still getting this error.

My online searches then suggested it had something to do with the XmlWriter, but there was no obvious way to find the ConformanceLevel of an XmlWriter. After a bit of digging I discovered that XmlWriter.Create can take an XmlWriterSettings object, which has a ConformanceLevel property. I set this property and passed the settings object to XmlWriter.Create, and that solved the issue.

The lesson here is that it is not enough for XslCompiledTransform.OutputSettings to have the right ConformanceLevel; the XmlWriter itself must also be created with a ConformanceLevel of Fragment or Auto.

// The writer needs its own ConformanceLevel; the transform's OutputSettings alone are not enough.
XmlWriterSettings xmlWriterSettings = new XmlWriterSettings { ConformanceLevel = ConformanceLevel.Fragment };
XmlWriter resultWriter = XmlWriter.Create(memoryStream, xmlWriterSettings);
xslt.Transform(elementToTransform, resultWriter);

Nowhere online was this specific change spelled out, so hopefully this proves useful for others. Please leave a comment if it does.

Checking if supplied domain user credentials are correct with PowerShell

On a recent project we had to create multiple Windows Services all running under a single account. Since we did not want to store the password in source control, we had our script prompt us for it. This worked really well until the day we typed the wrong password: Active Directory was set up to lock accounts after three bad attempts, so entering the password wrong once, across all those services, would instantly lock the account.

The obvious solution was to check once that the supplied credentials were correct before doing all that work and stupidly locking the account.

Thankfully, someone had asked this question before. Thanks to JimB on ServerFault, I basically used his entire answer as it did exactly what was needed. Original answer on ServerFault.

function Test-Login($serviceUsername, $password) {
    # http://serverfault.com/questions/276098/check-if-user-password-input-is-valid-in-powershell-script
    # Get the current domain using the logged-on user's credentials
    $CurrentDomain = "LDAP://" + ([ADSI]"").distinguishedName
    $domain = New-Object System.DirectoryServices.DirectoryEntry($CurrentDomain, $serviceUsername, $password)

    # A failed bind leaves the entry's properties empty
    if ($domain.name -eq $null)
    {
        Write-Host "Authentication failed - please verify your username and password." -ForegroundColor Red -BackgroundColor Black
        return $false
    }
    else
    {
        Write-Host "Successfully authenticated $serviceUsername against the domain" -ForegroundColor Green
        return $true
    }
}

Remembering where you've been in Powershell with pushd and popd

The other day I discovered a long-standing pair of commands in PowerShell that let you navigate to a directory and then back to the previous one without having to keep track of where you were yourself: pushd and popd.

A quick bit of searching shows that these commands have existed in Unix shells for many years, and in PowerShell since version 2, where they alias Push-Location and Pop-Location. Wikipedia -- Pushd and popd

Where I have found this really useful recently is in deployment scripts where I need to change the current directory, but for usability I want to return to wherever the script was first called from, whether an error occurs or the script finishes successfully. A try/catch/finally pattern lets me put the user back where they started with confidence every time they execute the script.

try
{
    pushd DIRECTORYPATH
    # Logic goes here
}
catch
{
    # Make sure any exceptions are bubbled up
    throw $_
}
finally
{
    # Runs on success or failure, so the caller always ends up back where they started
    popd
}

TechNet -- Push-Location
TechNet -- Pop-Location

Good comments instead of bad

One of the refrains we all hear when the topic of code comments comes up is “My code is self-documenting.” On the surface this makes sense: why write more than you have to? Unfortunately, the way this is usually applied throws the baby out with the bathwater, leaving us worse off than we were with too many comments.

I’ve never met anyone who would argue that the code we create should be difficult to understand, or that how the code executes and where the flow of control goes should be hidden. Our code needs to be easy to understand so that, as we maintain it in the future, we do not have to rewrite entire classes just to add a bit of functionality.

So, what is a good code comment?

“Comments should explain WHY,” to paraphrase the colleague of mine who gave me this pointer.

Code Comments

How often have you come across code that works but does something in a crazy way, when a much simpler option is clear to you? Only when you implement your simple solution do bugs suddenly appear, and you finally understand why those lines existed.

An example of a superfluous comment can be found in the .NET Framework Reference Source for the String class: “Range check everything.” Yes, I can see that’s being done. You’ve obviously thought it important enough to point out, but why? Why is it important to note that you are doing all the range checks? A comment like this only raises more questions while answering none.

This sort of situation is where code comments become invaluable and will save you and your colleagues hours in the future. Spend the time to explain why you made the design decisions you did. When you apply a strange workaround, explain why you did so instead of taking the more ‘obvious’ solution.

A good example can also be found in the .NET Framework Reference Source, this time in the DateTime class. The adjustment in that code is not clear at first glance: why would you want to adjust the day remainder in milliseconds when the time is negative? However, the comment above it explains why you would want to do such a thing, and even uses a clear example to demonstrate it.
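As a contrived sketch (this is not the actual Reference Source code; the method and numbers are invented), a “why” comment can turn a baffling-looking adjustment into an obvious one:

```csharp
using System;

public static class TimeMath
{
    // WHY: C#'s % operator yields a negative remainder for negative inputs,
    // so -1ms would become an invalid negative time-of-day. Adding one full
    // day of milliseconds maps it to 23:59:59.999 of the previous day instead.
    public static int ToTimeOfDayMilliseconds(int millis, int millisPerDay)
    {
        int remainder = millis % millisPerDay;
        return remainder < 0 ? remainder + millisPerDay : remainder;
    }

    public static void Main()
    {
        Console.WriteLine(ToTimeOfDayMilliseconds(-1, 86400000)); // prints 86399999
    }
}
```

Delete the comment and the `remainder < 0` branch reads like a bug waiting to be “fixed”; with it, nobody touches the line.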

Commit Comments

This concept of explaining why finds even more value when it comes to commit comments.

How often do you see an odd design decision, look into the history of the file, and find the commit comment is just “Added files”? A comment like that is less than worthless: not only does it tell you nothing new, but the irritation sits with you as you try to fix whatever is wrong. Again, explain WHY you have made the changes you have. Explain why you chose one pattern or design over another. When writing these comments, imagine that 12 months from now you will be coming back to this. If we can barely remember why we made choices a few weeks ago, how can we expect to remember why we made them 52 weeks later?

So, to conclude: “self-documenting code” applies only to how something works. It can never show why changes were made or why designs were chosen. When you come back months or years later, the why is more valuable than any amount of how or what comments.

Mythical Man Month

This is such an awesome book that I thought it worth making more people aware of its existence.

This iconic book by Fred Brooks covers many of the lessons he learnt as project manager of the IBM System/360 project. It is separated into essays that each cover one topic and offer suggestions for avoiding the issues he ran into. Forty years later we are still hitting the same problems on projects today. A must-read for any tech lead, and highly recommended for any developer who thinks they will end up on a project as either the sole dev or one of a couple; basically, any dev ever.

Using a script to set the Copy Local flag to false

As with my previous post, I recently came across a task that we will probably want to repeat in the future, so in line with my aim from The Pragmatic Programmer I decided to automate it.

The problem: an architectural requirement of this project was to rely on dependency injection for all library references. To help enforce this, every project outside the DI one had to have the Copy Local flag set to false on all of its references.

I started doing this manually but realised it would take a long time to go through all 40+ projects, and that the job would come up again in the future. So it was automation time.

A quick web search did not turn up anyone who had solved this problem before, so I figured I would have to learn some PowerShell and build it myself.

As csproj files are simply XML, I did some research into how easy it is to manipulate XML in PowerShell. It turns out this is one of PowerShell's strengths. However, my first implementation had issues with namespaces, so I had to use the Select-Xml cmdlet introduced in PowerShell v2.

Building the XPath queries was fairly simple. The one hiccup to remember is that csproj XML has a default namespace of "http://schemas.microsoft.com/developer/msbuild/2003", so you need to bind that namespace to a prefix (msb here) and use the prefix in your XPath queries. To specify the namespace in Select-Xml you use the -Namespace option.

# $privateXPath uses the msb prefix, e.g. "//msb:Reference/msb:Private"
Select-Xml -Path $projFilenameFull -Namespace @{msb = $projectNamespace} -XPath $privateXPath

The next step was saving out the changes. This hit an initial roadblock: all the files were read-only, because with TFS you have to explicitly check out files before you can edit them. That led me to the TFS command-line executable, tf.exe. This turned out to be fairly pleasant: I could simply pipe the collection of csproj files to be checked out into a chunk of script that iterated through the collection and ran the checkout command on each file with the provided TFS credentials.

I deliberately did not attempt to check in the changes, as I want the user to review them and make sure the solutions still build. This is something you'd run once a month to make sure the requirement is still being followed.

The final hiccup was that the .NET XML classes PowerShell uses have an issue where they insert a default empty namespace (xmlns="") whenever you create a new element. This caused the projects to fail to load in Visual Studio, as the namespace was incorrect. The fix was quick and easy: take the file and replace any occurrence of xmlns="" with an empty string, which is accomplished in PowerShell with this line

(Get-Content $projFilenameFull) | Foreach-Object {$_ -replace ' xmlns=""', ""} | Set-Content $projFilenameFull

So my first non-trivial PowerShell script was a fun and fiddly dive into scripting all my troubles away. So far so good. ;)

SetCopyLocalInAllCsProjFiles.ps1

Deleting all bin and obj folders from a solution

Quick little post.

Since reading The Pragmatic Programmer by Andrew Hunt and David Thomas I've been looking for ways to automate tasks whenever I find myself doing something I know I'm going to repeat later, or am already repeating right there and then.

The other day I was working on a Visual Studio solution someone else had started, and when trying to build it I found they had checked in some of the bin and obj folders.

So I opened up the root folder of the solution and prepared to trawl through about a dozen projects deleting all the bin and obj folders. Noticing that I was about to repeat the same steps, and that this would happen again in the future, I did a quick search to see if anyone else had already solved this.

Awesomely someone had.

So a huge thanks to Glenn at Development on a shoestring for providing exactly what I needed. I'm putting this here just in case his site should disappear and take the knowledge with it.

I threw the following into a PowerShell script that sits in source control, ready for use in the future.

# Iterate through all subdirectories and delete all bin and obj folders
# http://blog.slaven.net.au/2006/11/22/use-powershell-to-delete-all-bin-obj-folders/
# Had to use it to get rid of a bunch of bin and obj folders in a PoC, but thought it smart to put here for others to use
Get-ChildItem .\ -Include bin,obj -Recurse | ForEach-Object { Remove-Item $_.FullName -Force -Recurse }
RemoveAllBinAndObjFolders.ps1

Event Handlers only firing once in Microsoft Office AddIns

I've just been working on a project where we had to create some AddIns for several versions of Microsoft Office. Now, I knew there was a lot of bad blood around Office AddIns, but I thought the complaints were overblown as I finished off the 2010 AddIn without so much as a hiccup. The 2007 and 2003 AddIns, however, showed why Office has the reputation it has.

The problem I ran into was that I had several event handlers to catch two events: the opening of a new inspector, and a simple button click. So I did what you'd expect and registered them in the startup methods.

Initial testing went fine: I started up Outlook and triggered one event, made some changes, restarted it, and then tested the other event. It took a while until I tried to trigger both events one after the other, at which point I found only the first would fire; after that, both event handler hooks were forgotten and would not rehook until the application was restarted.

public partial class ThisAddIn
{
    private void ThisAddIn_Startup(object sender, System.EventArgs e)
    {
        Outlook.Explorer explorer = this.Application.ActiveExplorer();
        Outlook.Application app = (Outlook.Application)explorer.Application;

        // The Inspectors collection is never stored in a field, so nothing
        // keeps it alive once Startup returns.
        app.Inspectors.NewInspector += new InspectorsEvents_NewInspectorEventHandler(Inspectors_NewInspector);
    }
}

After much searching I began to come across suggestions that the garbage collector was collecting the object holding the event references after the first event. I was at a loss as to what to do until I came across another discussion where someone was having a similar problem, and the answer was to store the object in a class-level variable to keep the garbage collector from collecting it.

A quick edit and some testing showed this to work reliably. So, if Office is only triggering an event once, make sure the object references are stored somewhere the garbage collector won't reach. And make sure to assign the object before you register the handler, or the garbage collector will still collect it.


public partial class ThisAddIn
{
    public Outlook.Inspectors _appInspectors;

    private void ThisAddIn_Startup(object sender, System.EventArgs e)
    {
        Outlook.Explorer explorer = this.Application.ActiveExplorer();
        Outlook.Application app = (Outlook.Application)explorer.Application;

        _appInspectors = app.Inspectors;
        _appInspectors.NewInspector += new InspectorsEvents_NewInspectorEventHandler(Inspectors_NewInspector);
    }
}

Playing with Regex on OSX

RegExhibit

If you've ever been stuck trying to build anything but a simple regular expression, you know how painful it can be to get it to match just what you want.

When developing for .NET on Windows I was introduced to a brilliant free tool called Rad Software Regex Designer. It gives you the ability to provide an example of the text you want to match, and an area to slowly build up your regular expression while getting instant feedback on what it is doing. It even has dialogs to insert specific regular expression constructs, in case your proficiency with regular expressions isn't high or you've simply forgotten how to write a non-capturing group. After moving to OS X for work I went looking for a similar tool for the Mac, and after a while I found it.
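As a quick illustration of one such construct, here is the non-capturing group in .NET's own regex engine (the pattern and input are invented for this example):

```csharp
using System;
using System.Text.RegularExpressions;

public static class RegexDemo
{
    public static void Main()
    {
        // (?:...) groups the alternation without creating a numbered capture,
        // so Groups[1] is the digits rather than the matched prefix.
        Match m = Regex.Match("release-214", @"(?:release|build)-(\d+)");
        Console.WriteLine(m.Groups[1].Value); // prints "214"
    }
}
```

With a plain `(release|build)` the version digits would land in group 2 instead, which is exactly the sort of detail a tool with instant feedback catches for you.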

RegExhibit is a GUI tool for OS X that uses the Perl regular expression engine to help you build regular expressions. This should translate fine to any other language that uses a PCRE-style library, but make sure you check before deploying. The core of the program is two text areas: you place an example of the text you want to match in the lower area and build up your regular expression in the top one. There are even tabs for doing matches and splits, but you'll likely find yourself in the match tab most of the time. However, it doesn't offer the built-in dialogs of Rad Software Regex Designer, so make sure you've got a regular expression reference handy.

This is a great tool that has saved my sanity several times already, and I recommend it to anyone who has to play with regular expressions and is developing on the Mac.

Adding Attachments with ActionMailer

Recently I had the fun task of using Ruby on Rails' ActionMailer to create some automated emails to send out to users. At some point it was decided to attach the original email we received from the user to the notification email we were sending back.

Now, you would think that using the attachments method provided by ActionMailer would make it as easy as just giving it the file you wanted to attach. It turns out it's like that in Ruby on Rails 3, but not in 2.

The most infuriating thing was that if you use the attachments method, the mailer method you called it from will no longer render its default view template. This means you have to call the render method explicitly.

Instead of writing it all out nicely, I'll just link to a blog post from ELCtech.com that explains it well. http://www.elctech.com/ -- [ActionMailer] Multipart emails with attachments