Andre's Blog

Personal blog of Andre Perusse

Nikon 18-200mm lens vs 70-300mm lens

After I bought my first SLR camera (actually a DSLR - the Nikon D40) four years ago, I was eager to obtain a telephoto lens since I'm anti-social and don't like to get too close to my subjects. Actually, I sometimes like to pretend I'm a nature photographer, and wild animals would never let me get close enough with the 18-55mm kit lens that came with the camera. So I picked up Nikon's 70-300mm AF-S lens in April 2007 and have been very happy with its performance. The only problem is that it only goes down to 70mm, so it seems like I'm forever switching lenses - quite a nuisance when you're juggling lens caps and trying to keep dust from intruding into the innards of the camera body and lenses.

So, when I recently upgraded my D40 to a new D3100, I thought I would also get myself Nikon's highly-regarded 18-200mm lens at the same time. This one lens covers almost the same range as my other two lenses combined, with the obvious advantage that I never have to swap out lenses. Now, I had read several reviews prior to buying the 18-200, and I was well warned that while it was an extremely convenient lens, it came at the cost of some image quality over superior shorter range lenses. I even saw a side-by-side comparison (with a 55-200 lens, not a 70-300) from Camera Labs which showed poor corner sharpness, but generally acceptable center sharpness, at least to my eyes.

Now, I am not really a critical photographer and I certainly don't make money taking pictures - it's just a personal hobby. However, I was quite alarmed at the poor showing of center sharpness of the 18-200mm lens at 200mm vs my 70-300mm at the same focal length in my own unscientific test. See for yourself below. Though my backyard in the winter may not be as aesthetically pleasing as other picturesque samples, this mossy rock provides a lot of edges and surface detail for an excellent sharpness comparison:

[Photo: 18-200mm sample at 200mm]

The above photo is from my new Nikon AF-S 18-200mm VRII lens at 200mm.

[Photo: 70-300mm sample at 200mm]

This picture is from my Nikon 70-300mm AF-S VR lens at 200mm. Notice how much sharper the details are in the area of the moss on the rock, and in the surrounding leaves.

So, I was originally intending to sell my 70-300 lens to help pay for the new, ultra-convenient 18-200 lens, but now I'm not so sure. Though I'm not a professional photographer, I'd hate to lose the relative sharpness of this lens because, well, it's better than the 18-200 lens. I had hoped that the 18-200 would at least be acceptable to my eyes, but this comparison is telling me a different story. I'm going to have to think about this before I put my 70-300 up for sale. Bummer.


Core i7-860 PC Build 2009

After using my Intel Core 2 Quad Q6600 machine for just over two years, I decided it was time for an upgrade. So, a few weeks ago I ordered my new Core i7-860 PC (some assembly required, of course). I was able to keep my four-year-old Antec P180 case (I love this case) and two-year-old Corsair 620 watt modular power supply, but everything else got replaced (oh, except for my trusty LG DVD writer - that stayed too).

I had been thinking about getting a Core i7-920, but the newer P55-based Core i7-860 seemed to be a slightly better value, especially considering that I have no plans to install mountains of RAM or run dual graphics cards, to which the i7-9xx line and its associated X58 chipset are better suited. Since I always overclock my systems anyway, the 860 was a better choice than the 870, and hundreds of dollars cheaper, too.

So, I ended up with the following build:

Motherboard: ASUS Maximus III Formula
CPU: Core i7-860
CPU Cooler: Noctua NH-U12P
RAM: 8GB OCZ DDR-3 1600MHz CL-8
System Drive: OCZ Vertex 120 SSD
Data Drive: Seagate 7200.12 1TB HD, er, Seagate 7200.11 1.5TB HD
Graphics Card: Sapphire Radeon 5870
OS: Windows 7 Ultimate

The Core i7 is a no-brainer: it's a thoroughly modern CPU and Windows 7 is designed to take full advantage of it. It has great power-management, and the ability to throttle or burst individual cores based on system load. And it's wicked fast!

The motherboard took a lot more research before I made a decision. I'm not a PC gamer, so it probably looks a little odd that I would choose a high-end gaming board. While gaming doesn't interest me much, I do like quality and the Maximus board just oozes it. I hate owning boards that have peculiar quirks, and while it's almost unavoidable that any board is going to have the odd issue or two, I figure it's worth the extra bucks to buy something that's over-engineered. And certainly, the Maximus is not trouble-free, but it's the most reliable board I've ever owned.

As for the CPU cooler, I had wanted a Thermalright Ultra-120 Extreme (TRUE) but there were none in stock when I placed my order. So I got the Noctua cooler instead. It's very well built and includes two 120-mm fans with optional adapter cables that let you trade cooling performance (RPMs) for lower noise. Having nothing else to compare the cooling performance with, I can't really evaluate the Noctua properly. But I was hoping for a bit better cooling - my CPU is overclocked to 3.8GHz and idles around 38 - 40 degrees Celsius. I haven't played with any settings other than BCLK, so maybe I can get the CPU voltage down, but I have seen idle temps in the high 20s on other coolers, so I am a bit disappointed here. On the other hand, I had no problem with this cooler blocking the RAM slots - the OCZ RAM doesn't have any fancy heatspreaders, so the modules aren't especially tall anyway.

I'll skip the SSD drive for a moment since it was the last component in my build. While I was waiting for it to arrive, however, I installed Windows 7 on the Seagate 1TB drive I had originally bought to use as my data drive. The Windows install went fine, but about 3 hours later the drive started ticking. And while it ticked, all hard drive operations were either suspended or terribly slow. The drive ticked for about 20 minutes and then stopped, at which time hard drive performance returned to normal. However, I no longer trusted the drive. It did this ticking thing a couple more times so it went back to the store. Seagate has had some serious problems with its 1TB drives lately, so instead of getting a straight replacement I opted instead for a Seagate 1.5TB drive. I've now been using it for about a month without any hiccups whatsoever.

The Sapphire Radeon 5870 works very well, as you might expect. Again, I'm not a gamer but I like having capable gear for the two times a year I haul out Flight Simulator X. I especially like the 5870 because of its terrific power management, which means it doesn't consume gobs of juice while it's just sitting there idle displaying the Windows desktop. I also like the fact that the stock fan is nearly silent at idle - I don't like my rig to make a lot of noise. Though sometimes (rarely) when the machine wakes from sleep the Radeon fan powers up to full speed and it is VERY loud. Once I log back into Windows, however, the fan returns to normal.

Up to this point, I hadn't installed my OCZ Vertex SSD yet (thanks, OnHop). And while the rig was plenty fast, it wasn't a significant improvement over the Q6600 machine it replaced. But after I put in the Vertex, it was a whole new ball game. Putting an SSD in your computer as the boot and system drive is like supercharging your entire machine. This thing is blazingly fast now! Windows 7 boots in just over 10 seconds. Shutting down takes around 4 seconds. Waking from sleep is nearly instantaneous. And most day-to-day operations performed on the computer desktop experience little to no lag at all. It's like removing a clog from a drain - everything just flows much more smoothly. The Vertex was an expensive component, but it is the best upgrade you can give your computer and worth every penny.

This rig is hands-down the fastest, most stable, and most trouble-free PC I have ever built. It takes less power than my Q6600/Radeon 2900XT machine and runs cooler, too. I am extremely impressed and pleased with the results of this build and I look forward to approximately 24 months of "surfing the web at 50,000 frames per second" when it'll likely be due for another replacement.

Left High 'n' Dry By OnHop.ca

I recently decided to build myself a brand new computer system based on the new Intel P55 chipset and a Core i7 860 processor (I'll be blogging about that entire experience shortly). While I was at it, I decided I would dive into the wonderfully speedy world of SSD drives, too. Unfortunately, my local retailer wasn't able to source my preferred drive (an OCZ Vertex 120GB) and my usual online retailer (ncix.com) had no stock.

I decided to try OnHop.ca since they had the lowest price on PriceCanada.com and they had stock. They also have favourable reviews on PriceCanada, so I figured I was safe. The product page for the drive listed the item as "in stock" and a delivery estimate of 1 to 2 days. I ordered the drive late on a Tuesday night, so I figured it would arrive by Friday which would have worked out great since I had put aside the entire weekend to build my machine.

But Friday arrived and there was no drive. Boo. I was disappointed but well, these things happen so I decided to be patient. By the following Tuesday there was still no drive, so I sent an inquiry to OnHop to ask about my order. Wednesday morning I got a response back claiming that the shipping company had "lost" the order, and that OnHop was shipping out a new unit immediately, and that I would have it the following day. Thursday came and went, and so did Friday - no drive. So, weekend number 2 and I'm still unable to assemble my system.

At this point, I'd had enough and dispatched a strongly worded (but not abusive) email to OnHop demanding my money back, since their order fulfillment process was apparently horribly broken. Naturally, I had to wait until Monday afternoon for a reply, and while they complied and immediately refunded my purchase, I was rather shocked by their laissez-faire attitude about the whole matter. They claimed my order had been "delayed" because the item they had in stock was a "defective unit." I guess they only had the one defective item as stock, but why would a defective unit count against their sales inventory? And why did it take almost 2 weeks to discover the error? I was expecting a bit more along the lines of "sorry we screwed up - please allow us to fix it and keep you as a customer" but no, here's your money back, now get lost.

Anyway, I'll never order from OnHop again, and I've since reordered the drive from trusty ol' NCIX since they now list it back in stock. Here's hoping it arrives by THIS Friday.

Windows Home Server Saves The Day!

I have never been very good about backing up my data at home, and I have occasionally been bitten by the odd hard drive failure. However, most of the data I lost to these infrequent failures wasn’t too terribly important. These days, however, I have gigabytes of photos and several years worth of documents and taxes on my PC, so backups have become critically important. So, last year I picked up an HP MediaSmart EX470 Windows Home Server (WHS) to help fortify my backup strategy (and it serves as a NAS too, which is very nice). Naturally, I upgraded the processor and RAM, and added a 1 TB drive.

The WHS box has been performing its job since then, quietly backing up my desktop and my wife’s laptop every night. During the early hours of the morning, it wakes both machines from their sleeping state, backs up the hard drives, and puts them back to sleep. You’d never know it was doing anything, which is precisely how I like this technology to work.

Yesterday, the 160 GB Hitachi hard drive in my wife’s 2-year-old laptop up and died. I have read how “easy” it is to restore an entire hard drive when using WHS, so I was looking forward to a painless operation. I swapped out the drive with a brand new one, and then went to prepare the WHS “Restore CD.” Using my working desktop, I went to the “Software” share on my WHS box to find the ISO of the Restore CD. However, in that folder was a ReadMe file that told me the ISO was “out of date” and that I had to download the newest version from Microsoft’s web site. I would have thought, since the ISO file is only 230 MB, that it would have been automatically updated by Windows Update, but apparently I have to get it myself. Oh well, no big deal. I download the new file and burn it to a CD.

I pop the CD into the laptop and it sounds like an old floppy drive grinding as it takes 5 minutes to boot up. It tells me it’s searching for the WHS machine and apparently finds it as it asks for my WHS password. I enter it, and after a 20 second wait, it tells me “general network error” and that it can’t contact the WHS box. Lovely. So for the next 15 minutes I fart around with network cables and switches figuring I’ve got some kind of DHCP/DNS issue. I come to find out that the WHS admin console doesn’t even work from my desktop anymore, so I bounce the WHS machine and everything is fine now. I have no idea what that was all about.

Anyway, back to the laptop and another 5 minutes for the Restore CD to boot and now I’m through to selecting the machine from the WHS backup catalog that I want to restore. At this point, I was hoping it would figure out that the new hard drive was EXACTLY the same size as the old one and just tell me “I’ve got it from here – go get a coffee and come back in 30 minutes” but this was not the case. It pops up a window telling me that I have to “initialize” my new hard drive first, using the Windows Disk Manager. I’m no stranger to the Disk Manager, but there is no “Initialize” operation in Disk Manager. I come to figure out (after two attempts) that “initialize” means partition and format (not just selecting MBR vs GPT as the partitioning method) – I wish it had told me that in the first place. Again, I was kind of hoping WHS would do this automatically for me.

So I create one large C: partition on the entire drive and proceed to the next step. It tells me to select a source hard drive backup from WHS and a destination partition on the laptop. There are THREE source hard drives listed for the laptop – a manufacturer’s “SYSTEM VOLUME” at 1.5 GB, the large C: drive, and an 8GB D: drive. The “SYSTEM VOLUME” and D: drives are obviously manufacturer partitions set up either to help with some kind of manual hard drive wipe and restore, or maybe for watching DVDs or listening to CDs without booting into Windows (some laptops have this capability, though neither my wife nor I have ever had any occasion to try it out on this laptop). I proceed with just restoring the C: drive because I don’t want to futz around again in Disk Manager trying to get the partition scheme recreated, and the mysterious “SYSTEM VOLUME” doesn’t even have a drive letter, so I don’t know how I would deal with that anyway.

So the C: drive begins to restore. It tells me it’s going to take 4 minutes. Wow, I knew she didn’t have a lot on the drive, and I have a gigabit network but that’s still really fast. After 30 seconds, it tells me it’s going to be 5 minutes. Another 30 seconds and we’re up to 10 minutes. Then 32 minutes. Ah, the good ol’ Microsoft progress bar that goes backwards before it goes forwards. Seriously, it’s 2009 – is it that hard to have a progress bar that works, or at least makes sense? (As a developer, I know programming progress bars is a pain, but it CAN be done!)

I never know how long it really takes because we go out for an hour, but when we return home the restore has completed. I reboot the laptop and it’s like there was never a problem – it looks and works exactly the same as it did before the hard drive died. And this is all worth it. Though I think the restore process could be much easier with a great reduction in the number of steps required, getting a machine back to EXACTLY the way it was before a hard drive crash is simply amazing. My usual method for restoring a machine is to re-install a fresh copy of the OS, then re-install ALL the applications, then manually configure all the settings for the OS and apps, then restore all the data files from whatever backups I happen to have. This process usually takes days and is never 100% completed. Compared to the old way, WHS is a MAJOR step forward. I simply think the WHS restore process could do with a little fine-tuning.

Comparing Team Build and Cruise Control

In recent months, I’ve had the opportunity to set up two build servers; one using Cruise Control.NET and the other using Team Build (part of Team Foundation Server). In this article, I’d like to chronicle the strengths and weaknesses I’ve found in both systems in my particular environments. My experience with each build system is admittedly limited, but I hope the information below might prove useful to someone. Note that the source control system in both environments was Team Foundation Server 2008.

Why Have a Build Server?

Before I compare the two build systems, I should first point out the advantages of having a build server. In my case, the build server provides three essential services:

  1. Continuous Integration. As soon as a developer checks in some new code, a build is performed to ensure that the new code doesn’t break the build. If it does, everyone knows about it as soon as possible so it can be fixed. No more waiting for integration builds to find out that there’s a problem.
  2. Automated nightly builds. Nightly builds provide your QA people (if you’re lucky enough to HAVE QA people) with a regular, reliable heartbeat of incremental builds. QA knows that every morning there will be a new build they can test.
  3. Reliable, repeatable installers. Our build server also outputs an MSI installer that allows the software to be installed much as if it was a shipping product. If someone needs a new installer or wants to search for a particular build, the build server is the place to go. If you don’t HAVE a build server, you need a developer to perform a build on their workstation instead and if there’s no developer available, you’re out of luck.

I believe build servers are an essential part of modern software engineering – if you don’t have one yet, get one set up as soon as you can.

Cruise Control.NET

Cruise Control is certainly the most popular build system available. It’s an open-source solution that has been around for years, works with all kinds of source control and repository systems, and has a huge community of users and plug-in developers. A spin-off version especially for .NET developers (herein called CC.NET) is managed by ThoughtWorks and is available from their site. CC.NET has a nice web-based dashboard for viewing the status and history of builds, as well as allowing control over the build process. A single CC.NET server can be configured to build multiple projects.

I think one of the strongest reasons to pick CC.NET is its maturity and resulting pervasiveness. It’s been around for what seems like forever, and if you’ve ever had to deal with a build server in either a Java or .NET environment, chances are you’ve used it already. In addition, the CC.NET plug-in developer community has written solutions for just about every conceivable problem you might have. There is a wealth of knowledge on the Internet on how to set it up and get it working, so it’s hard to go wrong with CC.NET.

Something that’s both a strength AND a weakness with CC.NET is that it's not part of the Team Foundation Server (TFS) ecosystem. This is a strength in the sense that it allows you some flexibility with the CC.NET server’s configuration. A single CC.NET server can work with multiple TFS installations (and/or other source control providers) and support multiple projects. In my particular case this was a huge reason to use CC.NET. As you’ll see below, Team Build has some restrictions on the account used to run the Team Build service, while CC.NET allows you to communicate with the target TFS server by simply specifying a username and password with TFS access. This “disconnection” from TFS is also a weakness in that the central TFS server has no idea what build environments are out there. There is no automatic central storage of build profiles and definitions, and no way to automatically interact with build information from within the development environment. Once, one of my colleagues modified his build script to delete all files in a particular directory. Unfortunately, when it ran, it resolved that directory to the C:\ drive root folder, completely obliterating his build environment. The problem was exacerbated by the fact that the CC.NET build configuration had never been backed up, so he had to start all over from scratch!

Another thing that I would list as a weakness for CC.NET is the fact that it uses its own XML schema for controlling builds. This means that you have to learn this schema inside and out in order to make maximum use of all CC.NET has to offer. While many may see this as a minor point (and it really is) it is nonetheless an additional barrier to getting CC.NET installed and configured to perform your builds.
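
To give you a flavour of that schema, here’s a rough sketch of a ccnet.config that polls TFS every minute and runs an MSBuild compile. Treat it as illustrative only – the server URL, project path, and credentials are made up, and the exact element names in the source control block depend on which TFS plugin (and version) you have installed. Note the username and password sitting there in clear text, which I’ll come back to below.

<cruisecontrol>
  <project name="MyProduct">
    <triggers>
      <intervalTrigger seconds="60" />
    </triggers>
    <!-- TFS connection details, including the clear-text credentials -->
    <sourcecontrol type="vsts">
      <server>http://tfsserver:8080</server>
      <project>$/MyProduct</project>
      <username>builduser</username>
      <password>secret</password>
      <autoGetSource>true</autoGetSource>
    </sourcecontrol>
    <!-- The actual build: hand the solution off to MSBuild -->
    <tasks>
      <msbuild>
        <executable>C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable>
        <projectFile>MyProduct.sln</projectFile>
        <buildArgs>/p:Configuration=Release</buildArgs>
      </msbuild>
    </tasks>
    <publishers>
      <xmllogger />
    </publishers>
  </project>
</cruisecontrol>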

Team Build 2008

Team Build is a service that comes with Team Foundation Server. Having used CC.NET before, I initially found the architecture of Team Build very confusing. Instead of setting up a build server, creating a build definition ON the build server, and pointing it at your source control server, Team Build handles things very differently. First, you install Team Build on a computer. That computer then becomes a “Build Agent” that the master TFS server can talk to. This is a major architectural difference: with CC.NET, the CC.NET server initiates communication with TFS during a build; with Team Build, it’s TFS that initiates the communication with a build agent of your choosing.

Because of this architecture, your build definitions don’t actually live on the build server. The build definitions are part of your TFS project and are under source control themselves. This centralizes the build definitions for the project, where they can be deployed to any build agent in your environment simply by selecting that agent when you perform a build. Well, when I say “any build agent” I mean any build agent that has the dependencies your particular project needs installed on it. In my case, that means a couple of MSBuild extensions, which isn’t very heavyweight as far as dependencies go – they can be installed in about 30 seconds.

So, the centralized storage of build definitions can be a very good thing. Looking back at the problem my colleague encountered when his C:\ drive was deleted, well, that could still happen with Team Build, but at least the build definition would still be intact since it is under source control. However, this centralized storage of build definitions has a downside, too – all build agents (i.e., build servers) must be on the same Windows domain as the TFS server, or at least on a trusted domain. In addition, the Team Build service must run under a domain account (so you need the password for this domain account when you set up Team Build). This can present some problems if you’re in an environment, like mine, where IT controls the TFS server and the domain it’s attached to, and getting your build server attached to the same domain requires requisitions, approvals, security checks, etc. To be sure, this can be a royal pain in the neck and is where I see one of CC.NET’s biggest advantages. You can put CC.NET ANYWHERE and have it talk to TFS with no problem. Of course, you’ll need to put a TFS username and password in clear text in your CC.NET configuration file, and that may or may not be a cause for concern in your environment.

Another advantage of Team Build (sort of) is that it uses the same XML language to control builds as is used in your existing Visual Studio projects. Specifically, it uses MSBuild to define and control the build definition. So, if you’re already familiar with MSBuild, you can figure out how to configure Team Build fairly easily. I was NOT familiar with MSBuild and I had a helluva time getting my builds to work the way I wanted, but then again, I was trying to do some fancy things with automatic assembly version number increments and integrating a couple of WiX installer projects. Like CC.NET, MSBuild has a community of extension developers and while I haven’t performed a formal comparison of the two communities, I would expect CC.NET to have a lot more available simply because it’s been around so much longer.
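
For comparison, here’s roughly what a Team Build definition looks like – it’s just an MSBuild file (TFSBuild.proj) that gets checked in alongside your code. This is a heavily trimmed sketch with an invented solution name and path, not a complete working definition, but it shows the general shape for TFS 2008:

<Project DefaultTargets="DesktopBuild" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- Pulls in the standard Team Build targets that drive the build on the agent -->
  <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\TeamBuild\Microsoft.TeamFoundation.Build.targets" />
  <ItemGroup>
    <!-- Which solution(s) to build -->
    <SolutionToBuild Include="$(BuildProjectFolderPath)/../../MyProduct.sln" />
    <!-- Which configuration/platform combinations to build -->
    <ConfigurationToBuild Include="Release|Any CPU">
      <FlavorToBuild>Release</FlavorToBuild>
      <PlatformToBuild>Any CPU</PlatformToBuild>
    </ConfigurationToBuild>
  </ItemGroup>
</Project>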

A major disadvantage of Team Build is that, in some cases, it drags in a pretty heavy dependency. If you have created unit tests using the built-in features of Visual Studio, and you want to run those unit tests as part of your build, you have to install Visual Studio on your build machine. Yuck. The same is true if you have a Visual Studio Database Edition project – you must install that edition of Visual Studio too if you want to build that project. I’m really hoping for a solution to this requirement in the next version of Visual Studio and TFS.

Lastly, while CC.NET has a pretty spiffy web dashboard for status display and build control, Team Build has its status and control built directly into Visual Studio. While this means that you can interact with your builds without leaving the development environment, it also means that anyone WITHOUT Visual Studio can’t see anything (unless your TFS server has Team System Web Access installed).

There’s obviously a lot more involved in the comparison of these two systems, but I hope this blog post gives you someplace to start. Check out the resources below for more information.

Related Resources

If you’re trying to decide which build system to go with, I suggest you check out the following resources which contain some additional helpful information:

Cruise Control .Net vs Team Foundation Build on StackOverflow.com

Buck Hodges’ Blog (TFS Development Manager)

Jim Lamb’s Blog (TFS Program Manager for Team Build)

MSBuild Extension Pack

MSBuild Community Tasks Project

Build Engineering = Sorcery

Most of my software engineering career has been involved with internal intranet applications. Since applications were typically only deployed once on one application server, I’ve never had the opportunity to explore the wondrous world of build engineering. For the last two years, however, I’ve been working for a software vendor, and build engineering is very much an important part of the development process. Until recently we had a dedicated build engineering department that looked after most of the nasty details involved with this tricky voodoo science. But that department is now gone, and my team has to look after its own build requirements.

For the past 4 to 6 weeks, I’ve been delving deep into the bowels of MSBuild, the technology provided by Microsoft to perform builds of .NET projects. When you select “Build” from within Visual Studio, it’s calling MSBuild to do the work, and generally Visual Studio shields you from all the intricacies of this build engine. However, when you need to perform automated builds, properly version your output assemblies, and automatically create MSI installers, you really need to learn how MSBuild works. And let me tell you, it is pure sorcery. You have to be a bloody wizard to understand how all of these pieces interact with one another and how to control them to do your bidding.

At the root of the problem is the fact that the build engine and pretty much every artifact that interfaces with it is represented by an XML file. I used to love XML – it solved so many of my problems storing and retrieving structured data where a SQL database was not available or not suited to the task at hand. Now I’m starting to develop a true resentment of this once heroic technology that has been twisted to perform evil. It pretty much boils down to this: XML is being used to specify the granular tasks that must be performed to compile and output your entire solution, including all the projects, and all the classes and other files included in those projects. XML in this case is not being used to represent data storage, but instead is being used as a primitive command and control language. Individual XML nodes can be used to set variables, to check conditions, and to execute tasks. To create these XML files, you must be a master of both the XML schema available to you and the way the actual build engine interprets that schema. Though Visual Studio provides some rudimentary Intellisense when editing XML files, all it can show you is a list of valid elements and/or attributes. And there is NO compiler for these XML files – the only way to see if what you’ve written is syntactically valid is to actually run it! Holy smurf, I feel like I’m back in 1983 or something.

To complicate matters further, MSBuild and its related technologies (Visual Studio, WiX, Team Build) just LOVE to scatter all kinds of environment and preprocessor variables throughout these XML files. While extremely helpful when performing tasks, these variables are damn near inscrutable – the development environment provides you with little assistance in determining what variables are available in any given location, what they mean, where they’re coming from, and what process is setting their values. You must scour through reference documentation, online forums, and the occasional helpful blog to find out just what the hell is going on at any point in time. Writing XML code to control your build and your installer is very much like reciting some obscure Latin incantation to perform some magic. You just have to hope you’ve said it in just the right way and with the proper inflections so that the build gods will oblige your request. It’s enough to drive anyone completely batty.
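
To make the “command and control language” point concrete, here’s a tiny stand-alone MSBuild file of my own invention – a property acting as a variable, a couple of conditions, and two of the tasks that ship in the box:

<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- A "variable": defaults to Debug unless the caller passes /p:Configuration=Release -->
    <Configuration Condition="'$(Configuration)' == ''">Debug</Configuration>
    <OutputPath>bin\$(Configuration)\</OutputPath>
  </PropertyGroup>
  <Target Name="Build">
    <!-- "Statements" are tasks; MakeDir and Message ship with MSBuild -->
    <MakeDir Directories="$(OutputPath)" Condition="!Exists('$(OutputPath)')" />
    <Message Text="Pretending to build into $(OutputPath)" Importance="high" />
  </Target>
</Project>

You can run it with msbuild example.proj, and notice that nothing ever checks whether $(OutputPath) is spelled correctly – a misspelled property simply evaluates to an empty string, which is exactly the sort of thing that makes this feel like sorcery.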

Now, where did I leave my wizard’s hat?

Working with Active Directory in .NET 3.5

Many projects that I’ve worked on over the years have required some kind of interface with Active Directory. Back in the good ol’ ASP days, there was ADSI (Active Directory Service Interfaces), and .NET uses the System.DirectoryServices namespace to essentially wrap ADSI with managed code. It’s been a long time since I worked directly with ADSI, but working with AD up to and including the .NET Framework 2.0 was never a very straightforward task.

Take for example the classic requirement of simply obtaining the current user’s full name from AD. Say you have a web site that uses either IIS/NTFS to protect pages using ACLs, or uses ASP.NET Forms Authentication with an AD Provider. Obtaining the user’s login name is relatively easy, using the Page’s User object:

string loginName = User.Identity.Name;

But getting the user’s full name from AD requires several lines of code involving DirectorySearcher and SearchResult objects:

// Requires a reference to System.DirectoryServices
string firstName = null;
string lastName = null;

// Bind to the root of the current domain and search for the account by sAMAccountName
// ("jsmith" here would really come from the login name obtained above)
DirectoryEntry entry = new DirectoryEntry();
DirectorySearcher searcher = new DirectorySearcher(entry);
searcher.PropertiesToLoad.Add("givenName");
searcher.PropertiesToLoad.Add("sn");
searcher.Filter = "(&(objectCategory=person)(samAccountName=jsmith))";
SearchResult result = searcher.FindOne();
if (result.Properties["givenName"].Count > 0) firstName = result.Properties["givenName"][0].ToString();
if (result.Properties["sn"].Count > 0) lastName = result.Properties["sn"][0].ToString();

Ick. Thankfully, .NET 3.5 has added the System.DirectoryServices.AccountManagement namespace which abstracts most of this code and makes it super-easy to deal with AD Principals in a strongly-typed manner:

// Requires a reference to System.DirectoryServices.AccountManagement (new in .NET 3.5)
PrincipalContext pc = new PrincipalContext(ContextType.Domain);
UserPrincipal user = UserPrincipal.FindByIdentity(pc, "jsmith");
string firstName = user.GivenName;
string lastName = user.Surname;

Gotta love progress!  :-)

Home Theater PC

I love having the ability to play computer video files on my living room television. Back in the old days (about 3 or 4 years ago), I used a modded original XBox with XBox Media Center (XBMC) to perform this task. XBMC was great because it worked with just about every video format out there. In fact, I don’t think I ever ran across a video file that XBMC wouldn’t play. Ahh, those were the good ol’ days.

Unfortunately, the original XBox doesn’t have the horsepower necessary for playing today’s high-definition video files. Though I now have an XBox 360, the only high-definition formats it will play are Windows Media and MPEG files. Oh, there are many solutions for performing on-the-fly transcoding from your PC for unsupported files, but I haven’t seen any that preserve multi-channel surround sound along with that transcoding. And modding the 360 may be possible, but it doesn’t seem as popular as the original XBox mods, so I’m not going down that dark alley.

Instead, I recently bought and built a relatively cheap home theater PC (HTPC). Building a PC for dedicated home theater use isn’t a new idea – people have been doing it for years. But the concept has recently spiked in popularity with many generic components now available allowing people to build some pretty nice systems for not too much money. Still, putting a working HTPC together is not for the weak-hearted as the industry still has a long way to go before the technology is mature. At the outset of this project, I was worried that I wouldn’t be able to achieve the usability I wanted because of several factors: home network speed, audio/video codec configuration issues, integration with my Harmony remote control, proper display on my aging high-def TV, and more. In the end, it turned out to be a lot less effort than I anticipated.

The system I put together consisted of the following:

  • Antec NSK2480 mATX chassis (has no IR sensor or front-panel display)
  • Gigabyte GA-MA78GM-S2H mATX motherboard with integrated ATI HD 3200 graphics
  • AMD Athlon 5200+ EE CPU
  • 2GB Crucial PC6400 RAM
  • 500GB Western Digital Hard Drive
  • Windows Media Center remote control

The entire package cost me less than $500. I chose this chassis because it looks like a piece of audio/video gear and fits nicely in my A/V rack. It was inexpensive, comes with a nice power supply, and has quiet case fans. This motherboard is a popular HTPC choice because of its micro-ATX form factor, integrated ATI graphics which handle high-def video with aplomb, the built-in HDMI connector, and built-in optical S/PDIF output connector. I was originally going to get an Athlon 4850 CPU, but there were none left in stock. So I got the slightly more power-hungry 5200 for only $10 more. The Windows Media Center (WMC) remote control was only $30, and I only bought it for the IR receiver that it comes with (since I planned to use my Harmony remote instead).

Assembling the PC was a breeze, so I had no issues there. After installing Vista Ultimate as the OS, it was time to start getting my hands dirty with codec configuration. Before I get into that, I should mention that I was pleasantly surprised to find the WMC remote could be used to place the machine in and out of sleep mode. Excellent! Unfortunately, the IR code that is used for this also turns my XBox 360 on and off. I haven't yet resolved this minor annoyance.

Back to the codec thing. In preparation for what I assumed would be a nightmare of configuration trial and error, I spent some time over at AVS Forums researching the recommended installation incantations. Though many senior members strongly recommended against installing pre-packaged “codec packs,” several others indicated they had very good luck with Vista Codec Pack (VCP). So, I tried out VCP and what do you know – it worked great! All my high-definition content (including the relatively new .mkv format) worked like a charm in Vista Media Center! I had some small issues getting DTS and Dolby Digital 5.1 surround sound formats to output properly to my surround processor, but I had that working within 15 minutes. So, the codec nightmare I had feared turned out to be a non-issue after all.

I did have one video glitch, however, but it wasn’t a surprise. My TV is a circa 1999 Toshiba rear-projection HDTV set. When this thing was built, HDMI wasn’t even a gleam in some video engineer’s eye. All I have are component inputs on the back of my set. I did buy an HDMI-to-component converter off eBay that works fairly well, but it appears to introduce a slight horizontal and vertical offset that unfortunately cannot be corrected with ATI’s driver software alone. Thankfully, the multi-talented Swiss-army knife of video signal software, PowerStrip, was able to remedy this for me.

After all this, I was pretty happy using Vista Media Center to browse my server for video files and play them back over my gigabit home network. I can completely control the HTPC and VMC with my Harmony remote control, and everything just works. However, having used XBox Media Center for years, I knew there was a better way to control the HTPC than with VMC. A friend at work told me about MediaPortal, a sort-of spin-off from XBMC for Windows machines. MediaPortal is somewhat more slick than VMC with my personal killer feature being the integration with the Internet Movie Database (IMDB). Instead of browsing video files alphabetically with thumbnails (which are usually black squares since movies often start with a black screen), I can now browse my videos by genre and by using the poster cover art. It’s a much better way to work with your movie collection.

I’ll still be tweaking my HTPC for several weeks, I’m sure, but for the moment I am extremely satisfied and impressed with how easy this was to put together. Stop watching video on your teeny-tiny computer screen – get yourself a nice HTPC today!  :-)

HTC Touch As An iPhone Replacement

In January of this year (2008), I was getting the itch to buy something to replace both my cell phone and my Palm PDA (I used the Palm as my MP3 player). I wanted an all-in-one solution so I wouldn't have to carry around two pieces of technology with me. Being the geek that I am, I was naturally interested in the iPhone but at that time it still wasn't available in Canada, and wouldn't be available at all through my preferred mobility provider which uses a CDMA network. So entranced was I by this new Apple technology that I seriously considered getting an iPod Touch and just keeping my old cell phone. I'd still have two items to carry around, but at least one of them would be super-cool.

In doing some research, I came across an interesting Windows Mobile phone from HTC called the "Touch." It has no number pad or keypad and instead relies almost completely on its touch-sensitive screen for controlling the device (much like the iPhone). Windows Mobile was never designed to be operated by a touch screen, so HTC includes this glitzy little interface called Touch Flo, which is neat enough but doesn't really replace the standard Windows Mobile interface. However, the real advantage of the HTC Touch was the network fee - it came with an UNLIMITED data plan for only $7 a month! As for replacing my MP3-playing Palm, the HTC Touch takes a microSD card, so I could at least approximate the storage capacity of an 8GB iPod Touch.

So that's what I did. I bought the HTC Touch with an 8GB microSD card and I've been mostly happy with it. Unlike the iPod or iPhone, the HTC has both regular and stereo bluetooth, so I also bought a pair of Motorola S9 bluetooth headphones, which work and sound pretty darn good. So, things have been good - mostly. There are some serious downsides to this solution, however.

First, I've mentioned that Windows Mobile is not really a mobile OS designed for touch screens so it has nowhere near the usability or cool flashiness of the iPod Touch/iPhone. Second, the battery life is atrocious - with the bluetooth radio on all the time and playing MP3s for about an hour a day, the device won't last 24 hours without a recharge. These aren't deal-breakers mind you, just a little annoying. What is more than just a little annoying is the fact that the HTC tends to corrupt the contents of the microSD card every once in a while (a couple of times a month on average). This is a known issue all over several Internet forums, but no one knows why and a fix is not forthcoming from HTC. So I just live with it, growling when it happens and re-copying my MP3s over when it does. I expect this doesn't happen on the iPhone.

UPDATE (March 2, 2009): After upgrading the HTC Touch to Windows Mobile 6.1 (a free upgrade) several months ago, I haven't experienced a single SD card corruption issue.

The latest "gotcha" with using the HTC as an iPhone replacement is with my car stereo, or pretty nearly any car stereo that supports "MP3 players." Usually this means they have an AUX jack that you can plug your MP3 player's headphone jack into. And I suppose that works well enough - it's certainly way better than using a tiny FM transmitter to do the same thing. But if you're lucky enough to have a decent car stereo, it will also come with an iPod dock connector so that you can completely control your iPod through the car stereo system! Playlists, artist names and song titles show up right on the car stereo display, and the iPod can be controlled with the steering-wheel radio controls, if your car is so equipped (mine is). But if you don't have the magical device from Apple, all you've got is the AUX jack. Boo. (I understand that some vehicles also let you plug in a USB memory stick containing MP3s, which works much like the iPod control - alas, my car does not have this option.)

So, do I buy an iPod so I can have the convenience (and cool) factor when I'm in my car (not to mention the absence of corrupted memory cards)? Do I go back to having two items of technology to lug around? I'm very tempted, though having everything on my cell phone is mighty handy. I suppose if it was really worth it to me, I could dump the HTC and switch mobility providers and get the iPhone now that it's available in Canada. Honestly though, paying $75 a month for the privilege isn't really enticing, not to mention the early termination fee I'd face with my existing provider. But I still have my eye on a new iPod. :)

Car Dilemma Solved - Mostly

A few months ago, I blogged about my search for a replacement vehicle as my existing lease was about to end. My lament was that I absolutely loved my existing car (a 2004 Volvo XC70) but that the lease buyout price was far too high given its current market value. And leasing a new version of the same model was somewhat price-prohibitive. I began an exhaustive search for a replacement, but nothing else really caught my fancy.

So, on the lease termination date I returned my car. It was a sad moment - tears were shed (well, no, not really). I was going to wait until the following month to see if Volvo's lease deals were more attractive, though if you've been following the news in the automotive industry lately you'll know that almost every car manufacturer is taking a bath on lease returns lately, with most manufacturers either raising their lease rates or discontinuing their lease programs altogether. This option was looking like a long shot and, as it turned out, it was.

So I started looking at used XC70s to replace my returned 2004. A few days after I returned the car, I saw a gem of a car online. It was a nice 2004 XC70 and the price was agreeable. I dug a little deeper and realized that this was the very car I had just returned! So I scooted on down to the local Volvo dealer and said I'd like my car back, please.

My precious 2004 XC70 and I have been reunited. However, it's making some odd noises now - perhaps a new Volvo is still in my future after all.