A subtle gotcha when referencing VB.NET code from C#

When referencing VB.NET code from C#.NET there is a subtle but very important difference between the two languages.

I found this when trying to access some legacy VB.NET code that used reflection to interact with two classes with nearly the same name. The VB.NET code worked perfectly fine, but I was faced with a compiler error when calling the same code from a new C#.NET project.

The error to watch out for is

CS0234 C# The type or namespace name does not exist in the namespace (are you missing an assembly reference?)

This comes down to the case sensitivity differences between VB.NET and C#.NET. In VB.NET the casing of class names is not important, so a class named VbCasingExample is the same as the class vbcasingexample. In C#.NET, however, these are two very different classes because identifier casing matters.

[Image: VbCasing.PNG]

Compared to

[Image: CSharpCasing.PNG]
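
To sketch the situation in code (LegacyLib and the class names here are illustrative, not from the original project):

// Hypothetical VB.NET assembly "LegacyLib" declares: Public Class VbCasingExample
using LegacyLib;

class CasingDemo
{
    static void Main()
    {
        // VB.NET would accept either casing; C# only accepts the declared one.
        var ok = new VbCasingExample();
        // var broken = new vbcasingexample(); // compiler error: type not found
    }
}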

So be careful when referencing VB.NET code from any case-sensitive .NET language.

Capistrano failure to deploy

I had an issue a while ago where running the Capistrano deploy command would start Capistrano but always fail with the following error.

The deploy has failed with an error: No live threads left. Deadlock?

The quick and easy answer is to make sure to prefix the cap command with bundle exec. So it should look like

bundle exec cap production deploy

There is some strange behavior here that I have not found the root cause of yet, but it appears to be about which gems get loaded: bundle exec runs the command against the exact gem versions in the project's Gemfile.lock, while a bare cap can pick up a different, globally installed Capistrano. So just a heads up.

Set ConformanceLevel to Auto but it is already set to Auto!?

Quick post. I'm currently uplifting an old .NET 1.1 app to 4.5, and when trying to run it I was getting the following exception.

System.InvalidOperationException was unhandled by user code
  HResult=-2146233079
  Message=Token Text in state Start would result in an invalid XML document. Make sure that the ConformanceLevel setting is set to ConformanceLevel.Fragment or ConformanceLevel.Auto if you want to write an XML fragment. 
  Source=System.Xml
  StackTrace:
       at System.Xml.XmlWellFormedWriter.AdvanceState(Token token)
       at System.Xml.XmlWellFormedWriter.WriteString(String text)
       at System.Xml.Xsl.Runtime.XmlQueryOutput.WriteString(String text, Boolean disableOutputEscaping)

Upon checking my code I verified that my XslCompiledTransform had its ConformanceLevel set to Auto. However, I was still getting this error.

My online searches suggested it had something to do with the XmlWriter, but there was no clear way to find the ConformanceLevel of the XmlWriter. After a bit of digging I discovered that when calling XmlWriter.Create you can pass in an XmlWriterSettings object, which has a ConformanceLevel property. I set that property and passed the settings object into XmlWriter.Create, and that solved the issue.

The lesson from this is that not only does the ConformanceLevel on XslCompiledTransform.OutputSettings have to permit fragments, the XmlWriter's ConformanceLevel does too (Fragment or Auto, per the error message).

// memoryStream and xslt (the XslCompiledTransform) were created earlier
XmlWriterSettings xmlWriterSettings = new XmlWriterSettings{ConformanceLevel = ConformanceLevel.Fragment};
XmlWriter resultWriter = XmlWriter.Create(memoryStream, xmlWriterSettings);
xslt.Transform(elementToTransform, resultWriter);
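
Incidentally, XslCompiledTransform also exposes the writer settings derived from the stylesheet's xsl:output element, and those can be handed straight to the writer if you want the two to agree. A sketch:

// OutputSettings is a read-only XmlWriterSettings built from <xsl:output>;
// passing it to Create keeps the writer consistent with the transform.
XmlWriter resultWriter = XmlWriter.Create(memoryStream, xslt.OutputSettings);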

Nowhere was the specific change detailed so I think this might be useful for others. Please leave a comment if you find this useful.

Creating Windows Event Log Sources from PowerShell

To help out with some logging in a recent project we needed to organise the Windows event logs with multiple sources. A bit of research later I found a nice and easy way to create these log sources from PowerShell using the New-EventLog cmdlet.

After a few iterations I also put in checks to make sure the event source did not already exist before trying to create it, and to give appropriate feedback to the user.

function Create-LoggingSources($loggingSources){
    Write-HostIndent "Creating logging sources" 1
    foreach($loggingSource in $loggingSources.LoggingSource){
        # SourceExists returns $true if the source is already registered on this machine
        $sourceExists = [System.Diagnostics.EventLog]::SourceExists($loggingSource)

        if($sourceExists)
        {
            Write-HostIndent "Logging Source '$loggingSource' exists" 2
        }
        else
        {
            Write-HostIndent "Creating Logging Source '$loggingSource'" 2
            New-EventLog -LogName "Sauces" -Source $loggingSource
        }

        # Cap the log at 10MB, overwriting the oldest entries as needed
        Limit-EventLog -OverflowAction OverWriteAsNeeded -MaximumSize 10240KB -LogName "Sauces"
    }
    Write-HostIndent "Logging sources created" 1
}
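
Write-HostIndent isn't a built-in cmdlet; it's a helper from the surrounding script. A minimal sketch of what it might look like, assuming it just indents console output:

# Hypothetical helper: writes a message indented by the given number of levels
function Write-HostIndent($message, $indentLevel){
    Write-Host (("    " * $indentLevel) + $message)
}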

The logging sources are provided in an XML configuration file; $loggingSources has the following structure.

<LoggingSources>
  <LoggingSource>Apple</LoggingSource>
  <LoggingSource>Orange</LoggingSource>
</LoggingSources>
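
Loading that file and calling the function would look something like this (the file name is illustrative):

# Load the XML config and hand the root node to the function above
[xml]$config = Get-Content .\LoggingSources.xml
Create-LoggingSources $config.LoggingSources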

I've put together a self-contained example of this script you can play with. It will create two new event log sources called Apple and Orange in the Sauces log. CreateEventLogs.ps1

Checking if supplied domain user credentials are correct with PowerShell

On a recent project we had to create multiple Windows Services, all running under a single account. Since we did not want to store the password in source control, we had our script prompt us for it. This worked really well until the day we put the wrong password in: because the script registered several services, one mistyped password meant several failed authentication attempts in a row, and with Active Directory set up to lock accounts after three bad tries we would instantly lock the account every time.

So the obvious solution was to check once that the supplied credentials were right before doing all this work and stupidly locking an account.

Of course, thankfully, someone had asked this question before. Thanks to JimB on ServerFault, I basically used his entire answer as it did just what was needed. Original answer on ServerFault.

function Test-Login($serviceUsername, $password){
    # http://serverfault.com/questions/276098/check-if-user-password-input-is-valid-in-powershell-script
    # Get current domain using logged-on user's credentials
    $CurrentDomain = "LDAP://" + ([ADSI]"").distinguishedName
    # Bind to the domain with the supplied credentials; a failed bind leaves .name empty
    $domain = New-Object System.DirectoryServices.DirectoryEntry($CurrentDomain, $serviceUsername, $password)

    if ($domain.name -eq $null)
    {
        Write-Host "Authentication failed - please verify your username and password." -ForegroundColor Red -BackgroundColor Black
        return $false
    }
    else
    {
        Write-Host "Successfully authenticated $serviceUsername with the domain" -ForegroundColor Green
        return $true
    }
}
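
A usage sketch, checking the credentials once before any service work begins (the prompt wording is illustrative):

# Verify the password up front so a typo can't burn multiple lockout attempts
$serviceUsername = Read-Host "Service account username"
$password = Read-Host "Service account password"
if (Test-Login $serviceUsername $password) {
    # Safe to go ahead and create the Windows Services
}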

Remembering where you've been in Powershell with pushd and popd

The other day I discovered a long-existing pair of commands in Powershell that let you navigate to a directory and then back to the previous one without having to keep track of the directories yourself. The two commands are pushd and popd.

A quick bit of searching shows that these commands have existed in Unix shells for many years as well as Powershell since version 2. Wikipedia -- Pushd and popd

Where I have found this really useful recently is in deployment scripts where I need to change the current directory, but for usability I want the user back where the script was first called from, whether an error occurs or the script finishes successfully. Using a try/catch/finally pattern lets me put the user back where they started with confidence every time they execute the script.

try
{
    pushd DIRECTORYPATH   # saves the current location on the stack, then changes directory
    # Logic goes here
}
catch
{
    # Make sure any exceptions are bubbled up
    throw $_
}
finally
{
    popd   # runs on success or failure, returning the user to where they started
}
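
pushd and popd are just the aliases; the same pattern works with the full cmdlet names, and you can peek at the saved locations with Get-Location -Stack (the path here is illustrative):

Push-Location C:\Deploy   # same as pushd: saves the current location, then moves
Get-Location -Stack       # shows the directories currently saved on the stack
Pop-Location              # same as popd: returns to the most recently saved location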

TechNet -- Push-Location
TechNet -- Pop-Location

Good comments instead of bad

One of the refrains we all hear when the topic of code comments comes up is “My code is self-documenting.” On the surface this makes sense: why write more than you have to? Unfortunately, the way it is usually put into practice results in the baby being thrown out with the bathwater, leaving us worse off than we were with too many comments.

I’ve never met someone who would argue that the code we create should be difficult to understand, or that how the code executes and where the flow of control goes should be hidden. Our code needs to be easy to understand so that as we maintain it in the future we do not have to rewrite entire classes just to add a bit of functionality.

So, what is a good code comment?

“Comments should explain WHY,” to paraphrase the colleague who gave me this pointer.

Code Comments

How often have you come across code that works but does something in a crazy way when a much simpler option is clear to you? Only when you implement your simple solution do bugs suddenly appear, and you finally understand why those lines existed.

An example of a superfluous comment can be found in the .NET Framework Reference Source of the String class: “Range check everything.” Yes, I can see that’s being done. You’ve obviously thought it important enough to point out, but why? Why is it important to note that you are doing all these range checks? A comment like this only raises more questions while answering none.

This sort of situation is where code comments become invaluable and will save you and your colleagues hours in the future. Spend the time to explain why you made the design decisions you did. When you apply a strange workaround, explain why you did this instead of the more ‘obvious’ solution.

A good example can also be found in the .NET Framework Reference Source, this time in the DateTime class. It is not clear at first glance why you would want to add double the day remainder in milliseconds when the time is negative. But the comment above the code explains why, and even uses a clear example to demonstrate it.
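
To make the distinction concrete, here is a hypothetical example (not from the Reference Source) of a comment that carries the why rather than the what:

// Hypothetical example: the comment explains WHY, not WHAT.
public static decimal RoundPrice(decimal price)
{
    // MidpointRounding.AwayFromZero is deliberate: .NET defaults to banker's
    // rounding (to-even), which caused cent-level mismatches against the
    // finance ledger, where 0.5 always rounds up.
    return Math.Round(price, 2, MidpointRounding.AwayFromZero);
}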

Commit Comments

This concept of explaining why finds even more value when it comes to commit comments.

How often do you see an odd design decision, but when you look into the history of the file the commit comment is just “Added files”? A comment like that is less than worthless: not only does it tell you nothing new, it irritates you, and that irritation sits with you as you try to fix whatever is wrong. Again, explain WHY you have made the changes you have. Explain why you chose one pattern or design over another. When writing these comments, imagine that 12 months from now you will be coming back to this. If we can barely remember why we made choices a few weeks ago, how can we expect to remember why we made them 52 weeks later?

So to conclude: “self-documenting code” applies only to how something works. It can never show why changes were made or why designs were chosen. When you come back months or years later, the why is more valuable than any amount of how or what comments.

Using a script to set the Copy Local flag to false

As with my previous post, I recently came across a repeatable task we will probably need again in the future, so in keeping with my aim from The Pragmatic Programmer I decided to automate it.

The problem was that an architectural requirement of this project was to rely on dependency injection for all library references. To help enforce this, every project outside of the DI one would require the Copy Local flag on all references to be set to false.

I started doing this manually but realised it would take a long time to go through all 40+ projects, and it would need doing again in the future. So automation time it was.

A quick web search did not show that anyone had solved this problem before, so I figured I would have to learn some Powershell and make it myself.

As csproj files are simply XML, I did some research to find out how easy it is to manipulate XML in Powershell. It turns out this is one of Powershell's strengths. However, my first implementation had issues with namespaces, so I had to use the Select-Xml command introduced in Powershell v2.

Building the XPath queries was fairly simple. The one hiccup to remember is that csproj XML has a default namespace of "http://schemas.microsoft.com/developer/msbuild/2003", so you need to declare that namespace under a prefix (msb here) and use the prefix in your XPath queries. To specify the namespace in Select-Xml you use the -Namespace option.

Select-Xml -namespace @{msb = $projectNamespace} -xpath $privateXPath
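
Putting the pieces together, a sketch of the lookup and update (the variable names and exact XPath are illustrative; Copy Local lives in the <Private> child of each <Reference> element). Note this only flips <Private> elements that already exist; creating missing ones is where the empty-namespace issue below comes from.

$projectNamespace = "http://schemas.microsoft.com/developer/msbuild/2003"
$privateXPath = "//msb:Reference/msb:Private"

# Load the project, set every Copy Local (<Private>) value to False, save
[xml]$projXml = Get-Content $projFilenameFull
Select-Xml -Xml $projXml -Namespace @{msb = $projectNamespace} -XPath $privateXPath |
    ForEach-Object { $_.Node.InnerText = "False" }
$projXml.Save($projFilenameFull)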

The next step was saving out the changes. This proved to be an initial roadblock, as all the files were read-only: since we were using TFS, files have to be explicitly checked out before they can be edited. This led me to the TFS command line executable "tf.exe", which proved fairly nice, as I could simply pipe the collection of csproj files I wanted checked out into a chunk of script that iterated through the collection and executed the checkout command on each file with the provided TFS credentials.
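
That checkout step looked something like this sketch (paths and credential variable names are illustrative; tf.exe needs to be on the path):

# Check the project files out of TFS so they become writable
Get-ChildItem -Path $solutionRoot -Include *.csproj -Recurse |
    ForEach-Object { & tf.exe checkout $_.FullName /login:$tfsUsername,$tfsPassword }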

I explicitly did not attempt to check in the changes, as I want the user to review them and make sure the solutions still build. This is something you'd run once a month to make sure the requirement is still being followed.

The final hiccup was that the .NET XML classes Powershell uses have an issue with putting an empty default namespace on any new element you create. This caused the project to fail to load in VisualStudio as the namespace was incorrect. The fix was quick and easy: take the file and replace any occurrence of xmlns="" with an empty string. This is accomplished in Powershell with the line

(Get-Content $projFilenameFull) | Foreach-Object {$_ -replace ' xmlns=""', ""} | Set-Content $projFilenameFull

So my first non-trivial Powershell script was a fun and fiddly dive into scripting all my troubles away. So far so good. ;)

SetCopyLocalInAllCsProjFiles.ps1

Deleting all bin and obj folders from a solution

Quick little post.

Since reading The Pragmatic Programmer by Andrew Hunt and David Thomas, I've been looking for ways to automate tasks whenever I find myself doing something I know I'm going to repeat later, or am repeating right there and then.

The other day I was working on a VisualStudio solution someone else had started, and when trying to build it found they had checked in some of the bin and obj folders.

So I opened up the root folder of the solution and prepared to trawl through about a dozen projects deleting all the bin and obj folders. Noticing that I was about to do the same steps repeatedly, and that this would come up again in the future, I did a quick search to see if anyone else had already solved it.

Awesomely someone had.

So a huge thanks to Glenn at Development on a shoestring for providing exactly what I needed. I'm putting this here just in case his site should ever disappear and take the knowledge with it.

I threw the following into a Powershell script that sits in source control, ready for use in the future.

# Iterate through all subdirectories and delete all bin and obj folders
# http://blog.slaven.net.au/2006/11/22/use-powershell-to-delete-all-bin-obj-folders/
# Had to use it to get rid of a bunch of bin and obj folders in a PoC but thought it smart to put here for others to use
Get-ChildItem .\ -Include bin,obj -Recurse | ForEach-Object { Remove-Item $_.FullName -Force -Recurse }
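
If you're nervous about what it will hit, Remove-Item supports -WhatIf, which lists what would be deleted without touching anything:

# Dry run first: shows each folder that would be removed
Get-ChildItem .\ -Include bin,obj -Recurse | ForEach-Object { Remove-Item $_.FullName -Force -Recurse -WhatIf }
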
RemoveAllBinAndObjFolders.ps1

Connecting an iOS device to a mixed 802.11b/n WiFi network

On a recent visit home I was presented with an old problem that had become a real annoyance: the iOS devices owned by the family (minus the iPad), including my iPhone, would not connect to the house WiFi network. They would happily detect it, but the moment they tried to connect they would time out.

It took a bit of poking around and trial and error, but I found a single change that then allowed all the devices to connect.

It boils down to only allowing 802.11b to operate on the 2.4GHz band instead of the default dual b/n. It seems the iPhone and iPod Touch devices do not play nicely with dual use of that band by the b and n standards.

Event Handlers only firing once in Microsoft Office AddIns

I've just been working on a project where we had to create some AddIns for several versions of Microsoft Office. Now, I knew there was a lot of bad blood around Office AddIns, but thought the complaints were overblown as I finished off the 2010 AddIn without so much as a hiccup. The 2007 and 2003 AddIns, however, showed why Office has the reputation it has.

The problem I ran into was that I had several event handlers to catch two events: the opening of a new inspector, and a simple button click. So I did what you'd expect to do and registered them in the startup method.

Initial testing went fine as I started up Outlook and triggered one event, made some changes, restarted it, and then tested the other event. It took a while until I tried to test both events following one another, at which point I found only one would trigger; then both event handler hooks would be forgotten and wouldn't hook in again until the application was restarted.

public partial class ThisAddIn
{
    private void ThisAddIn_Startup(object sender, System.EventArgs e)
    {
        Outlook.Explorer explorer = this.Application.ActiveExplorer();
        Outlook.Application app = (Outlook.Application)explorer.Application;

        // app.Inspectors here is a temporary COM wrapper that nothing keeps alive
        app.Inspectors.NewInspector += new InspectorsEvents_NewInspectorEventHandler(Inspectors_NewInspector);
    }
}

After much searching I began to come across implications that the garbage collector was removing the references after the first event. I was at a loss as to what to do until I came across another discussion where someone was having a similar problem, and the response was to save the object in a class-level variable to stop the garbage collector from removing it.

A quick edit and some testing showed this to work reliably. So, if Office is only triggering an event once, make sure the object references are stored somewhere the garbage collector won't go. And make sure to assign the object before you register the handler, or the garbage collector will still get to it.


public partial class ThisAddIn
{
    // Class-level reference keeps the Inspectors COM wrapper out of the garbage collector's reach
    public Outlook.Inspectors _appInspectors;

    private void ThisAddIn_Startup(object sender, System.EventArgs e)
    {
        Outlook.Explorer explorer = this.Application.ActiveExplorer();
        Outlook.Application app = (Outlook.Application)explorer.Application;

        // Assign first, then register the handler on the stored reference
        _appInspectors = app.Inspectors;
        _appInspectors.NewInspector += new InspectorsEvents_NewInspectorEventHandler(Inspectors_NewInspector);
    }
}

Starcraft II Patch Download Problem

Since getting Starcraft II I have bashed my head against the uselessness of the Blizzard Downloader. It is common knowledge that Blizzard uses the bittorrent protocol to transfer its patches, and you might think it's pretty damn penny-pinching of them to do that instead of offering direct downloads. Unfortunately, from my repeated experience, their bittorrent client sucks.

On both the 1.0.1 and 1.0.2 patches the downloader stopped part way through the download, reporting the error "There were multiple problems saving data." As you can tell, it offered up no useful help. A quick Google also turned up very little other than the basic and generic Blizzard Downloader FAQ page. After trying all of its suggestions I had got nowhere, except for swearing at Blizzard for not putting out a direct-download patch.

To keep this short: a few days later I tried downloading the patch files into the Updates folder using uTorrent instead of the Blizzard downloader, and five minutes later all the files had been downloaded. The Updates folder is located in the Starcraft II install folder and only exists while there's a patch waiting to be downloaded. Starting up Starcraft II again, it detected all the files were present and happily patched itself.

So if you're having trouble downloading the Starcraft II patches, wait a couple of days and try using a different bittorrent client to download the files. You'll find the bittorrent files near the Updates folder in the Starcraft II install folder.

Playing with Regex on OSX

RegExhibit

If you've ever been stuck trying to build anything but a simple regular expression, you know how painful it can be to get it to match just what you want.

When developing for .NET on Windows I was introduced to a brilliant free tool called Rad Software Regex Designer, which gives you an area to provide an example of the text you want to match and an area to slowly build up your regular expression while getting instant feedback on what it is doing. It even has dialogs for inserting specific regular expression constructs, in case your proficiency with regular expressions isn't high or you've just forgotten how to create a non-capturing group. After moving to OSX for work I went looking for a similar tool for the Mac, and after a while I found it.

RegExhibit is a GUI tool for OSX that uses the Perl regular expression engine to help you build regular expressions. This should be fine for any other language that uses a PCRE library, but make sure you check before deploying. The core of the program is two text areas: you place an example of the text you want to match in the lower area and build up your regular expression in the top one. There are even tabs for doing matches and splits, though you'll likely find yourself in the match tab most of the time. It doesn't offer the same built-in dialogs as the Rad Software Regex Designer, however, so make sure you've got a regular expression reference handy.

This is a great tool that has saved my sanity several times already, and I recommend it to anyone who has to play with regular expressions while developing on the Mac.

You Can Die in the Tutorial

Well, I've now finished the Military career mission arc. It's a pretty solid introduction to PvE, touching on resists, aiming, firing, moving, etc. It finishes up with a mission you will be destroyed in unless you've been paying attention. It's nice to know some of the true nature of EVE makes it into the tutorial stuff. I also got a new frigate gifted as part of one of the missions, so I am now flying a Tristan. Depending on what happens in the near future I may ditch it and get myself a Catalyst.

Now I'm off to a Sisters of EVE agent for what I believe is the start of the beginner epic arc. I've never done an epic arc, so this should be quite interesting.

DC++ "Could Not Open Target File" Error

I was at a LAN the other week, and to make sharing files easier we used a program called DC++. Shortly after trying to download the first few files I got a confusing error: "could not open target file: the system cannot find the path specified". After spending much time looking around the web and asking friends at the LAN, we finally figured out what it meant.

In the Settings window, under the Downloads section, you designate two directories: a default download directory and an unfinished downloads directory. I had these two options set to directories that did not exist. A quick change did not quite fix it, as while the folders now existed they were under an account the active account could not access. A final change pointed these options at two directories that existed and that the account could access, and this fixed the problem right up.

So here's a little picture showing the two offending text boxes. Set these to folders your account can access, folders on your desktop for example, and that error should disappear.

My Borderlands Troubles

Being a Good Consumer

Last week I purchased the Borderlands DLC, Zombie Island of Dr Ned, for my PC retail version of Borderlands. After purchasing it I was informed I would also have to install the 1.1 update, so that was downloaded as well.

We've Got a Problem

My first attempt to update Borderlands resulted in a catch-all error dialog informing me that the update had failed. After looking on the Gearbox forums I was pointed at running the update from the command prompt using the "msiexec" program: http://gbxforums.gearboxsoftware.com/showthread.php?t=87233 - (FIX) Fatal Error: Installation ended prematurely because of an error. This resulted in the patch doing the same "Gathering information about your computer" routine as the original attempt and silently exiting without doing anything. A quick check of the "msiexec" program's help showed a logging flag, which I set, and I posted the output of that along with the dxdiag output on the Gearbox forums: http://gbxforums.gearboxsoftware.com/showthread.php?p=1721935 - Unable to Install Patch or Zombie DLC - Windows 7 - Error 1603

You Didn't. Did You?

This is when things got interesting. Soon after, I read a new post from someone having a similar problem to mine and suggested they run the "msiexec" program with the logging flag set. A reply from another member said that would be pointless as "we know where the patch is try to write to". This gave me an idea and prompted me to look through my own "msiexec" log. A quick search for the drive I installed Borderlands to (E:) found nothing; a search for the 32 and 64 bit Program Files directories on the C drive found both. Now my question was a pretty simple one: did the patch and DLC require that the game be installed in the Program Files directory on the C drive? A question in the FIX thread brought no answer after several days, so I decided to give it a go and reinstalled Borderlands to my "C:\Program Files (x86)\" folder.

You Did

And guess what, the patch and DLC installed without a hitch.

For a game developer not to allow a patch to install to wherever the game is actually installed is just unacceptable, especially when no other game I have patched has ever required this.

My First Mac

Well, I decided I needed a new laptop to replace the one I've had for about two and a half years. Having seen all the talk over the past few years about how awesome Macs are, I decided to splash out and get a 13" 2.26GHz Macbook Pro.


Positives:

  • Very nice body construction. The aluminium unibody is one of the major things that drew me in.
  • Multi-touch touchpad. The two finger tap for right click is nice.
  • Battery life. Compared to the 2 hours at best for my old laptop, the 6+ hours is great.
  • The little LEDs and button on the side for displaying battery level. Very nice user experience touch.

Negatives:

  • No light displaying hard disk access. Not a big thing, but I found it nice knowing if the hard drive was thrashing itself to bits.
  • No fullscreen on Firefox. Silly Apple for having a usability practice that disallows this.
  • Getting used to how Apple uses the taskbar. Or in their case, the Dock. I'll get used to it, but it's quite different to how the *nix distros and Windows do it.
  • Lack of middle click. I use this heavily for opening tabs. Command-Click works though.


On the whole an interesting piece of machinery and I may install a *nix distro on it one of these days.

Edit: Moved one of the positives to negatives as it was in the wrong place.