Friday, January 23, 2009

Putting your eggs in one SATA basket

Dylan Beattie lamented over on his blog about RAID and its problems. I, too, have experienced problems in the past and discovered some of the same issues. I've settled on two approaches that seem the least offensive. The first is to use *Windows* RAID, which is theoretically not as fast as hardware-based RAID but is portable to another running Windows machine. No longer are you tied to a particular board manufacturer's unique RAID solution.

In my case, I started with RAID 5 using the on-board controller from an NVidia nForce motherboard only to have it keep crapping out. Finally, I went out and bought an Adaptec controller, backed up what I had, reformatted the drives for the new controller and then loaded the data back on. After two more partial failures in two months I was at my wit's end regarding RAID when I found a mention of using Windows-based RAID and freeing yourself from hardware lock-in. I knew that if something happened to the Adaptec controller, I'd need to get another one in order to get access to the data "locked" in its proprietary RAID scheme. With Windows, I was even able to move the drives from a Windows 2003 server over to a Windows 2008 server without losing data. Why? It's still NTFS and it's still their RAID system, so any Windows server will do. The only downside I've found is that Microsoft only "exposes" RAID on their server products, so this is a workable solution only if you're a real hard-core developer who installs Server instead of Workstation (oops, Professional - no, Business!).

The second, and even more liberating, approach is using virtual machines for development. I first began using them in 2003 when Microsoft acquired Connectix and began offering v5.5 for download on MSDN. Now, my "normal" machine configuration is a laptop with "desktoppy" applications - Office, Paint.NET, readers, tools, etc. - along with virtual machines that have only the minimum required development tools. The development VM is no longer tied to a physical machine, and backing it up is simply a matter of copying/compressing the *.vhd disk image file. If you leave a large-capacity USB drive attached to your workstation, you can make backups at will or at scheduled intervals. Wanna try the new VS2008? Maybe the VS2010 CTP? No problem! Just spin up a new VM and install there - no harm done and nothing else is affected. If you go this route, I suggest you install a base image of Windows (XP SP3, Vista SP1, Server, whatever) and then shut it down and back it up! Now, whenever you wish to try a new tool/environment, you restore/rename the backup and simply start installing the tools you want. Note that I always use NewSID to both generate a random system ID and rename the machine.

Friday, January 9, 2009

First NH Code Camp Announced

Chris Bowen has just announced the first ever NH Code Camp on Feb. 28th...



"A Code Camp is a free, community-driven event that offers a great way to meet, share with, and learn from fellow developers in the local area.  They began right here in New England and follow the basic ideas in the Code Camp Manifesto.

.NET Adventure - Data Layer (part 3)

This post is part of a series on building a complex .NET application from scratch. In Part 2, I introduced the Enterprise Library configuration as well as unit testing with NUnit. Today I'm going to cover static code analysis using FxCop and add NAnt to the mix.

[Image: NAdv.FxCop.rules]
FxCop is a free tool that will analyze your code using a comprehensive set of rules grouped into the nine categories shown on the right. When I first threw together the code for this project and ran FxCop, it immediately showed me all the bad habits I had - improper casing of fields vs. properties and method arguments, etc. Within a couple of minutes I had refactored things and brought the list down to just one violation - not having my assembly signed, which I can live with for now. In fact, running it against a large project I'm involved with at my day gig turned up 450+ errors and warnings! To be fair, that project has had a lot of cooks in the kitchen and everyone has their own idea of the One-True-Way™ when it comes to coding styles ;) That's the beauty of FxCop - it levels the playing field by not being emotional or stuck-in-the-mud about how things should be. The default rules it ships with are the accumulated best practices and standards employed by the Microsoft .NET engineers - talk about going straight to the source! Of course, you can edit the rules to change or disable ones you disagree with - however, my pragmatic side opted to conform, so they're all on.

[Image: NAdv.FxCop.results]
FxCop runs as a standalone GUI where you simply reference "target" DLLs for it to analyze - I think of them as victims which it will mercilessly rip through, exposing all their weaknesses. It also ships with a command-line driver called FxCopCmd if you wish to invoke it from scripts - more on that in a moment. The main window is shown on the right with my one lonely violation remaining. Step back for a moment and consider what we have... a free tool with hundreds of pre-installed rules that will analyze your code and report the results, complete with detailed information about each problem, so you can write better code. 'Nuff said.
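As a taste of the command line, a hypothetical FxCopCmd run might look like this (the install path and file names here are illustrative - adjust them to your own layout):

  "C:\Program Files\Microsoft FxCop 1.36\FxCopCmd.exe" ^
      /file:NAdv_vX.X\bin\NAdv.DataLayer.dll ^
      /out:NAdv.FxCop.results.xml ^
      /console /summary

The /file switch names the assembly to analyze, /out directs the report to an XML file, and /console and /summary echo the results to the console - which is exactly what makes it scriptable.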

So far we've got a Visual Studio project for data access and two free, open source tools to make our code better, our lives easier and reduce the federal deficit. Ok, maybe not that last one. One issue us lazy developers need to overcome is... well, being lazy. Right now, we've got to remember to run these tools. But wait! Before you pick up that phone to call, we've got one more special gift for you...

NAnt is a free .NET build tool originally based on Apache's Ant tool for Java. It uses an XML configuration file to define build projects that can be executed from the command line. Where it shines is that we can continue to develop/debug/compile in the Visual Studio IDE, but when we think we're ready to go, we can use NAnt to run all the steps and tools without batting an eye, with both hands tied behind its back. NAnt uses the notion of targets and tasks and has... you guessed it, a bunch of tasks already set up. For our purposes, it can launch MSBuild to compile the project the same as Visual Studio does, then launch NUnit to execute all our unit tests and finally run FxCop to make sure our code is clean, polished and ready for showing off.

Since NAnt is a mature tool there's plenty of information to be found by searching around, and even a good book called Expert .NET Delivery Using NAnt and CruiseControl.NET if you really want to delve deeper. Rather than going through the scripts line by line, I'm going to hit the high points and the techniques I use.
Organization:
  • Use a nant.cmd batch script to handle validation and setup, and to avoid having to remember the command-line switches
  • [Image: NAdv.NAnt.invoke] Use a main (or "driver") build script, called NAdv.build, that sits in the solution root folder. Note that many folks like to change the default extension to .xml since it is, in fact, an XML file. I prefer the default .build extension as I know instantly that it is the NAnt build script and that it is unique and special, not just some dumb old XML file lying around in the folder tree. (A skeletal example appears after this list.)
  • The main build script should set up all the properties and paths, then use the <nant...> task to drive individual build scripts in the project subdirectories (e.g. NAdv.DataLayer.build). This gives you cleaner organization and manageability as well as allowing each project to be tailored accordingly.
  • [Image: NAdv.NAnt.help] Within a build script, make a target called "help" AND MAKE THIS THE DEFAULT. An example output is shown on the right. <rant>I *hate* it when someone sets up an environment where you simply type in "nant" and all kinds of things start happening while the console scrolls by faster than the blink of an eye. I always think to myself "Uh-oh, what just happened? Did it build? Did it deploy and overwrite something?"</rant> It is much safer to show the help settings when someone types in "nant" and make them use an explicit build target (e.g. "nant build" or "nant deploy") - an intentional choice of what they wish to do.
  • [Image: NAdv.NAnt.exec] Make "main" target names a single word, e.g. clean, build, deploy, and have these targets call "internal" sub-targets that are compound words, e.g. build.init, build.compile, deploy.web, deploy.database.
  • Don't launch NUnit using NAnt's built-in <nunit2> task - use the generic <exec> task to run NUnit's console runner instead. NAnt and NUnit are different projects on different release schedules, and NAnt is compiled against the version of NUnit that was available at the time it was built - NUnit 2.2.8 at the time of this writing. Using <exec> allows you to run the latest version of NUnit directly, the same as when you do so interactively from the Visual Studio IDE.
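To make the bullets above concrete, here's a skeletal driver script in that style - a sketch only, so the file layout, target bodies and NUnit path are illustrative rather than lifted from the actual download:

  <?xml version="1.0"?>
  <project name="NAdv" default="help" basedir=".">

    <property name="nunit.console" value="C:\NUnit 2.4.8\bin\nunit-console.exe" />

    <!-- default target: typing a bare "nant" just shows usage -->
    <target name="help" description="Show usage">
      <echo message="Usage: nant [build | test | fxcop]" />
    </target>

    <target name="build" description="Compile the projects">
      <!-- delegate to the per-project script in its subdirectory -->
      <nant buildfile="NAdv.DataLayer\NAdv.DataLayer.build" target="build" />
    </target>

    <target name="test" depends="build" description="Run the unit tests">
      <!-- generic exec instead of the built-in nunit2 task (see above) -->
      <exec program="${nunit.console}">
        <arg value="NAdv.UnitTests\bin\NAdv.UnitTests.dll" />
      </exec>
    </target>

  </project>

With "help" as the default target, a bare "nant" tells you what's available instead of silently building or deploying something.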
One issue that comes up when attempting to use the <msbuild> task to drive compilation of the projects is that an Import entry is referenced incorrectly:
Change the following line in *.csproj:
    <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
To the following:
    <Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" />

Here's what the output of running "nant fxcop" looks like on the console:
[Image: NAdv.Nant.fxcop]

The FxCop results can be directed to an XML file that includes formatting, which looks like this:
[Image: NAdv.FxCop.results-xml]

The code for this version of the project can be downloaded from here. Note a couple of changes - the build output has been "tweaked" to NAdv_vX.X\bin, which is "above" the source and test trees, and the NAnt, NUnit and FxCop configuration/settings files are now included. If you unpack the zip file and then open a command prompt in NAdv_vX.X (1.2 in this case) you should be able to execute "nant build", "nant fxcop", and "nant test" to run all three targets. If you have problems, first check the console output and the log files, and leave a comment if you're still stuck.

.NET Adventure - Data Layer (part 2)

This post is part of a series on building a complex .NET application from scratch. In Part 1, I introduced the Active Record pattern used to create data layer components. Today I'm going to wrap up what I started by covering the Enterprise Library configuration, unit testing with NUnit and static code analysis using FxCop.
Microsoft's Enterprise Library "is a collection of reusable software components (application blocks) designed to assist software developers with common enterprise development challenges." Referring to the previous Common Application Architecture diagram, the "EntLib" would be placed on the far right as a "cross-cutting" technology. To begin with, I'm using it solely to provide low-level data access. At a minimum, you need to add project references to the EntLib components, configure a data source and add code to use it.

For data access there are three references: the standard Common library, a utility library called ObjectBuilder2, and the Data Access library itself:
[Image: NAdv.DataLayer_entlib.refs]

Note that ObjectBuilder2 is new with version 4.x and was delivered as part of the Unity application block - more on that later. EntLib installs a custom Visual Studio designer as well as a standalone configuration editor. However, since we're using the Express Editions of Visual Studio, where add-ins are not supported (hey, it's free!), the integrated designer doesn't work - but we can use the standalone editor as shown here:
[Image: NAdv.EntLib_config]

Simply launch the standalone editor from the Start menu shortcut and open the configuration file. Because .NET's app.config and web.config files can become "busy" with too many options, I prefer to use the EntLib feature of storing its configuration in a separate file called "EntLib.config" that is referenced from the app.config:
[Image: NAdv.DataLayer_app.config]

Notice above that only one new section was added to app.config and in that section is the reference to the standalone EntLib configuration file - nothing else is needed there.
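If you're reading without the screenshots, that section looks roughly like this - a sketch from memory of the EntLib 4.x schema (the designer writes fully qualified assembly names, and the source name is arbitrary):

  <configuration>
    <configSections>
      <section name="enterpriseLibrary.ConfigurationSource"
               type="Microsoft.Practices.EnterpriseLibrary.Common.Configuration.ConfigurationSourceSection, Microsoft.Practices.EnterpriseLibrary.Common" />
    </configSections>
    <!-- the only EntLib-specific content: point at the external file -->
    <enterpriseLibrary.ConfigurationSource selectedSource="File Configuration Source">
      <sources>
        <add name="File Configuration Source"
             type="Microsoft.Practices.EnterpriseLibrary.Common.Configuration.FileConfigurationSource, Microsoft.Practices.EnterpriseLibrary.Common"
             filePath="EntLib.config" />
      </sources>
    </enterpriseLibrary.ConfigurationSource>
  </configuration>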

For completeness, here's the full EntLib.config file as it exists at the moment:
[Image: NAdv.DataLayer_entlib.config]
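Again for those without the screenshot, the shape of that file is roughly the following - the connection string is a placeholder, so substitute your own server and credentials:

  <configuration>
    <configSections>
      <section name="dataConfiguration"
               type="Microsoft.Practices.EnterpriseLibrary.Data.Configuration.DatabaseSettings, Microsoft.Practices.EnterpriseLibrary.Data" />
    </configSections>
    <!-- the name handed to DatabaseFactory.CreateDatabase() -->
    <dataConfiguration defaultDatabase="AdventureWorksLT" />
    <connectionStrings>
      <add name="AdventureWorksLT"
           connectionString="Data Source=MYSERVER;Initial Catalog=AdventureWorksLT;Integrated Security=True"
           providerName="System.Data.SqlClient" />
    </connectionStrings>
  </configuration>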

To test this code we can turn to NUnit. Again, I'll say the goal of this series isn't to drill deeply into any one technology but rather to pull together various technologies into an end-to-end working application. Marc Clifton wrote a series of articles on Unit Test Patterns that is a good introduction to just how far one can go in designing and preparing tests. Suffice it to say that this is a topic unto itself and that the tests I've shown are far from complete - more along the lines of simple black-box pass/fail tests. My goal was to lay the groundwork for testing the data layer and show how the layered architecture lends itself to the kind of quality, professional development that delivers resilient code. Using a testing framework/harness and some code, you can immediately exercise the layer with relatively little effort.

[Image: NAdv.UnitTests.references]
Since NUnit is a test harness or "driver application" that runs tests, you simply need to create standard .NET class libraries with classes that publicly expose the tests you wish to make available. As a starting point, you should at least write a test for each publicly exposed method that can be called externally. The testing assembly needs a reference to the NUnit.Framework assembly as well as to your own projects that you'll be testing, as shown here on the right.

You'll notice that I've also created an app.config and EntLib.config file here in the unit test assembly. While testing code, this assembly will be the controlling application, so any references to configuration data will be found here in this project and not in the "lower-level" project being tested - the NAdv.DataLayer in this example. In fact, the configuration examples shown above were actually from the UnitTests project and not the DataLayer project!

Here's some simple code to test adding a new customer:
using System;
using NUnit.Framework;

[TestFixture]
public class CustomerTest
{
    [Test]
    public void AddCustomer()
    {
        Customer customer = new Customer();
        SetCustomerValues(customer);
        int customerId = customer.Store();

        // minimal check that an id was generated for the new row
        Assert.IsTrue(customerId > 0, "Store() should return the new customer id");
    }

    ... <snip> ...

    private void SetCustomerValues(Customer customer)
    {
        string timestamp = DateTime.Now.Ticks.ToString();

        customer.CompanyName = "AdventureWorks Unit Test " + timestamp;
        customer.EmailAddress = timestamp + "@adventureworks.com";
        customer.FirstName = "Unit";
        customer.LastName = "Test";
        customer.MiddleName = "A.";
        customer.NameStyle = NameStyleValue.Western;
        customer.PasswordHash = "L/Rlwxzp4w7RWmEgXX+/A7cXaePEPcp+KwQhl2fJL7w=";
        customer.PasswordSalt = "1KjXYs4=";
        customer.Phone = "1234567890";
        customer.Salesperson = "Salesperson " + timestamp;
        customer.Suffix = "Jr.";
        customer.Title = "Mr.";
    }
}

The [TestFixture] and [Test] attributes are NUnit's way of marking a class and method, respectively, to indicate that they are test code which it should execute. The AddCustomer() method is public, accepts no parameters and returns nothing to the caller (NUnit). The essence of the test is to construct an instance of the Customer class, set all the properties, and call the Store() method. While this is a contrived, hard-coded example it does show how little effort it takes to begin writing unit tests.

Because we're using the Express Editions of Visual Studio, some functionality is limited, including the ability to configure custom debugging settings. In fact, the default installation shows only a "basic" set of configuration settings on a project. You must use Tools | Options on the VS menu, select "Projects and Solutions" and check "Show advanced build configurations" to be able to switch from Release to Debug builds or to see additional project configuration options. Even after doing this you won't be able to specify the NUnit GUI as the application to launch when debugging. However, if you add the following two lines to the *.csproj file for your unit test project:
<StartAction>Program</StartAction>
<StartProgram>C:\NUnit 2.4.8\bin\nunit.exe</StartProgram>
within the <PropertyGroup> for the Debug configuration, you'll be able to press 'F5' to start debugging and the NUnit GUI will launch. Now, because the Visual Studio debugger has launched, you can set breakpoints in your code and step through it just like the "big boys" do using the Professional and Team editions.
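For context, here's roughly what the edited Debug <PropertyGroup> ends up looking like - everything except the two Start* lines is what Visual Studio generates by default, so your values may differ:

  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
    <DebugSymbols>true</DebugSymbols>
    <DebugType>full</DebugType>
    <Optimize>false</Optimize>
    <OutputPath>bin\Debug\</OutputPath>
    <DefineConstants>DEBUG;TRACE</DefineConstants>
    <!-- the two additions that launch the NUnit GUI under the debugger -->
    <StartAction>Program</StartAction>
    <StartProgram>C:\NUnit 2.4.8\bin\nunit.exe</StartProgram>
  </PropertyGroup>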

[Image: NAdv.UnitTests.DataLayer_run]

Above is the NUnit GUI showing the unit tests I've set up and run. Of course, the Zen of unit testing and TDD requires you to design your tests and APIs before ever writing a line of code - I won't lie to you and say I wrote these tests beforehand ;)

Since this post is already getting long, I don't want to set a trend so I'll stop here and discuss FxCop in the next post. The code for this version of the project can be downloaded from here.

Monday, January 5, 2009

.NET Adventure - Data Layer (part 1)

Rather than spending time drawing ugly stick men, so to speak (I can't draw very well), I'm going to refer to the Common Application Architecture shown in Chapter 3: Architecture Design Guidelines of Microsoft's patterns & practices Application Architecture Guide 2.0 (The Book):

[Image: RefAppArch]

I plan on working my way from the bottom up. It's possible to take a different approach - a BDUF (Big Design Up Front) approach will allow multiple teams to work in parallel, assuming the contracts between the layers are well-defined. For a data source I'll be using the Adventure Works LT sample database because it's a simple design and comes both pre-built and pre-populated. You can find documentation, the downloadable installers and a schema diagram for the database here on CodePlex.

Originally I installed AWLT on my local SQL 2008 Express instance; however, I kept getting "Must declare scalar variable..." errors using the technique I'm about to discuss. Since I have a full SQL 2005 Standard Edition instance running on another machine, I simply loaded the appropriate AWLT 2005 version onto it, changed the connection string, and I was cooking. I didn't change anything in my code, so one has to wonder if the two versions of the database are different or if the database servers behave differently. Having wasted over an hour chasing around trying to fix the error, I decided to pass for now and use the SQL 2005 instance. The goal isn't to become a SQL DBA here or study/debug the nuances of SQL 2008 Express but rather to explore the various layers of a complex .NET application in a pragmatic, approachable way.

I'm going to use the Active Record pattern because it allows flexibility that will be explored later on. An alternate and well-known approach is to generate strongly-typed DataSets using the tools in Visual Studio instead. You'll find information here and here, as well as plenty more examples of that approach, simply by searching. With the Active Record pattern used here, a data access class is built that models a single table in code. Essentially, the class contains public properties corresponding to the columns in a table and methods for CRUD behavior on that data.

An instance of an Active Record class is created in one of two states: populated with existing data loaded from storage, or new with default values (data that does not yet exist in storage). Therefore, the constructors for the class are pretty straightforward:

public Customer()
{
    InitializeData();
}

public Customer(int customerId) : this()
{
    Load(customerId);
}

As you can see, the default constructor creates an empty instance by calling InitializeData(), while passing an identifier (customerId in this case) to the constructor causes the class to Load() the instance data from storage.


As mentioned, this class needs properties that correspond to the columns in its underlying table. A few properties are shown here:

private System.Int32? _customerId;
public System.Int32? CustomerId
{
    get { return _customerId; }
    set { _customerId = value; }
}

private NameStyleValue _nameStyle;
public NameStyleValue NameStyle
{
    get { return _nameStyle; }
    set { _nameStyle = value; }
}

private System.String _title;
public System.String Title
{
    get { return _title; }
    set { _title = value; }
}

For the purposes of this tutorial the properties are simple values that hold data. Of course, business requirements and local development standards may dictate that validation be included, as well as other nifty things. Note that the first property shown, CustomerId, is nullable. This is a simple technique to indicate whether the instance contains new data that has never been written to the database (null value) or previously stored data that has been retrieved (not null). Remember, primary keys should not be nullable, so this approach is sufficient. If you prefer to use GUIDs then simply make the data type System.Guid? instead.


The Load() method is called by one of the constructors to retrieve the instance data from underlying storage and populate the properties as shown here:

public void Load(int customerId)
{
    string sqlCommand = "SELECT " + _columnNames + " FROM Customer WHERE CustomerId=@CustomerId";

    Database db = DatabaseFactory.CreateDatabase(Settings.Default.DatabaseName);

    DbCommand dbCommand = db.GetSqlStringCommand(sqlCommand);

    db.AddInParameter(dbCommand, "CustomerId", DbType.Int32, customerId);

    using (IDataReader dataReader = db.ExecuteReader(dbCommand))
    {
        if (dataReader != null && dataReader.Read())
        {
            PopulateData(dataReader);
        }
    }
}

At this point, I must take a detour from the code at hand to discuss the underlying data access logic. You can embed ADO.NET code directly in your data layer classes, or better yet put that logic into either a utility/helper class or a base class that all the data layer classes inherit from. You'll note that the p&p architecture diagram shows these as additional "Data Helpers / Utilities". An alternative to writing all that low-level plumbing is to use code that is already written, well tested, and vetted by hundreds of other developers, which is what I've done. I'm using the patterns & practices group's Enterprise Library to provide the plumbing. Before you groan because of past experiences or snide comments you've heard from others, I suggest you take another look at "EntLib". I've watched it "grow up" from the application block days of early .NET and it's become a mature, well-rounded, adult member of our little society. A lot of the dependencies that plagued earlier versions and forced an all-or-nothing approach have been removed. In fact, for this version of our .NET Adventure I'm only using the Data Access Application Block and nothing else. In future posts, I intend to swap out EntLib for other approaches to illustrate that it can be done and that the design techniques I've shown here make it feasible to do so.


Looking back at the code, notice that I'm using embedded SQL statements. I don't wish to engage in the well-worn debate of dynamic SQL vs. stored procedures, so please let's agree to get along - if I lose a few readers at this point, so be it. One of the reasons I used embedded SQL is SQL Compact Edition (SQLCE). As a light-weight, embedded database it has some compelling features, but the tiny size comes with a price - it does not support stored procedures. Had I used stored procedures, I would have closed that door, never to be opened again without a lot of effort. Not to mention that the various dialects of database code mean that stored procedures are difficult to translate - I'm speaking from experience, having written many procedures for SQL Server, Oracle, and DB2 which are conceptually similar but strikingly different in syntax and implementation. The dynamic SQL code shown above should support embedded databases as well as other vendors' products with little or no changes.


I won't go into the basics of the Enterprise Library classes - you can find plenty of examples online. The essence of the Load() method is that it takes a customer id as a parameter, constructs an SQL SELECT statement filtered with a WHERE clause specifying the customer id passed in, and uses a data reader to retrieve the row. If the row was retrieved (i.e., the customer exists) then the PopulateData() private method is called. You may wish to do something different, such as throwing an exception or coding the method with a return value used to indicate success or failure. Also note that "_columnNames", used in building the SELECT clause, is a string constant containing a comma-separated list of the column names.
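For illustration, a hypothetical declaration of that constant might look like this (the actual download may order or format the list differently, though the sketch below puts CustomerId first and NameStyle second to match the ordinals PopulateData() relies on):

private const string _columnNames =
    "CustomerId,NameStyle,Title,FirstName,MiddleName,LastName," +
    "Suffix,CompanyName,SalesPerson,EmailAddress,Phone," +
    "PasswordHash,PasswordSalt";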


PopulateData() simply copies the values returned in the IDataRecord into the class instance properties:

private void PopulateData(IDataRecord dataRecord)
{
    _customerId = (int)dataRecord["CustomerId"];
    // NameStyle is a bit column (ordinal 1 in the column list); map it to the enum
    _nameStyle = dataRecord.GetBoolean(1) ? NameStyleValue.Eastern : NameStyleValue.Western;
    _title = (dataRecord["Title"] == DBNull.Value) ? null : (string)dataRecord["Title"];
    _firstName = (dataRecord["FirstName"] == DBNull.Value) ? null : (string)dataRecord["FirstName"];
    _middleName = (dataRecord["MiddleName"] == DBNull.Value) ? null : (string)dataRecord["MiddleName"];
    _lastName = (dataRecord["LastName"] == DBNull.Value) ? null : (string)dataRecord["LastName"];
    _suffix = (dataRecord["Suffix"] == DBNull.Value) ? null : (string)dataRecord["Suffix"];
    _companyName = (dataRecord["CompanyName"] == DBNull.Value) ? null : (string)dataRecord["CompanyName"];
    _salesperson = (dataRecord["SalesPerson"] == DBNull.Value) ? null : (string)dataRecord["SalesPerson"];
    _emailAddress = (dataRecord["EmailAddress"] == DBNull.Value) ? null : (string)dataRecord["EmailAddress"];
    _phone = (dataRecord["Phone"] == DBNull.Value) ? null : (string)dataRecord["Phone"];
    _passwordHash = (dataRecord["PasswordHash"] == DBNull.Value) ? null : (string)dataRecord["PasswordHash"];
    _passwordSalt = (dataRecord["PasswordSalt"] == DBNull.Value) ? null : (string)dataRecord["PasswordSalt"];
}

One thing to note is that the _nameStyle property is using an enumeration called NameStyleValue which is included in the class and shown here:

public enum NameStyleValue
{
    Western,
    Eastern
}

The AdventureWorks database uses a bit column to store the style of the name - Western (First Last) or Eastern (Last, First) - so this enumeration is a convenience to make the property easier to work with in code.


To store the instance data into the backing storage system we expose a simple method called Store() shown here:

public int Store()
{
    if (_customerId == null)
        return StoreNew();
    else
        return UpdateExisting();
}

Notice the test of the _customerId property to see if it's null - if so, we store a new instance; if not, we update an existing one. Both StoreNew() and UpdateExisting() are private methods which callers cannot access. Not only does this keep the external, public interface of the Active Record class simple, but it hides the internal details of what determines "newness" and the different ways the class needs to handle it. Conversely, exposing StoreNew() and UpdateExisting() would put the burden on the calling code to understand the nuances and invoke the proper method, which makes the coupling between the data layer and the layers above more brittle.


Let's take a look at the StoreNew() method shown here:

private int StoreNew()
{
    // construct insert statement

    StringBuilder sb = new StringBuilder();
    sb.Append("INSERT Customer (NameStyle,Title,FirstName,");
    sb.Append("MiddleName,LastName,Suffix,CompanyName,SalesPerson,");
    sb.Append("EmailAddress,Phone,PasswordHash,PasswordSalt) VALUES (");
    sb.Append("@NameStyle,@Title,@FirstName,@MiddleName,@LastName,");
    sb.Append("@Suffix,@CompanyName,@SalesPerson,@EmailAddress,");
    sb.Append("@Phone,@PasswordHash,@PasswordSalt");
    sb.Append(")");

    // execute insert

    Database db = DatabaseFactory.CreateDatabase(Settings.Default.DatabaseName);

    DbCommand dbCommand = db.GetSqlStringCommand(sb.ToString());

    // NameStyle is a bit column, so map the enum back to a boolean
    db.AddInParameter(dbCommand, "NameStyle", DbType.Boolean, _nameStyle == NameStyleValue.Eastern);
    db.AddInParameter(dbCommand, "Title", DbType.String, _title);
    db.AddInParameter(dbCommand, "FirstName", DbType.String, _firstName);
    db.AddInParameter(dbCommand, "MiddleName", DbType.String, _middleName);
    db.AddInParameter(dbCommand, "LastName", DbType.String, _lastName);
    db.AddInParameter(dbCommand, "Suffix", DbType.String, _suffix);
    db.AddInParameter(dbCommand, "CompanyName", DbType.String, _companyName);
    db.AddInParameter(dbCommand, "SalesPerson", DbType.String, _salesperson);
    db.AddInParameter(dbCommand, "EmailAddress", DbType.String, _emailAddress);
    db.AddInParameter(dbCommand, "Phone", DbType.String, _phone);
    db.AddInParameter(dbCommand, "PasswordHash", DbType.String, _passwordHash);
    db.AddInParameter(dbCommand, "PasswordSalt", DbType.String, _passwordSalt);

    int rowsAffected = db.ExecuteNonQuery(dbCommand);
    if (rowsAffected != 1)
    {
        throw new DataException("Failed to INSERT new Customer.");
    }

    // return customer id (more efficient to use sproc with SCOPE_IDENTITY() and out param)

    string sqlCommand = "SELECT CustomerId FROM Customer WHERE FirstName=@FirstName "
        + "AND MiddleName=@MiddleName AND LastName=@LastName "
        + "AND CompanyName=@CompanyName "
        + "AND EmailAddress=@EmailAddress";

    dbCommand = db.GetSqlStringCommand(sqlCommand);
    db.AddInParameter(dbCommand, "FirstName", DbType.String, _firstName);
    db.AddInParameter(dbCommand, "MiddleName", DbType.String, _middleName);
    db.AddInParameter(dbCommand, "LastName", DbType.String, _lastName);
    db.AddInParameter(dbCommand, "CompanyName", DbType.String, _companyName);
    db.AddInParameter(dbCommand, "EmailAddress", DbType.String, _emailAddress);

    // read the generated id back, disposing the reader when done
    using (IDataReader dbReader = db.ExecuteReader(dbCommand))
    {
        if (dbReader != null && dbReader.Read())
        {
            _customerId = (int)dbReader["CustomerId"];
        }
    }

    return _customerId.Value;
}

In summary, this code constructs an SQL INSERT statement using the property values as parameters, executes the statement and then immediately retrieves the row just inserted. This flies in the face of the earlier discussion regarding dynamic SQL vs. stored procedures and may be a compelling reason for you to go the stored procedure route. As soon as the INSERT statement executes, there is a mismatch between the row stored in the database and the class instance in memory, specifically the customer id value. The AdventureWorks database uses an IDENTITY column to generate the primary key value, which we don't have visibility into. We cannot leave things this way - otherwise we'd have an instance in memory with no primary key while the row in the database has one. As we showed earlier, the primary key is used to determine "newness" - null means it's a new instance, and at this moment it is still null. There are a couple of ways to solve this problem. One way is, of course, to switch from dynamic SQL to a stored procedure so that the logic inside the procedure can issue a SCOPE_IDENTITY() function call to retrieve the newly generated primary key and return it to the caller (our code here) as an out parameter. Another approach would be to define the primary keys as GUIDs, which we could generate in our C# code within the StoreNew() method; the instance data in memory and the row in the database would then have the same value and be identical. Finally, it's worth noting that we could stick with dynamic SQL and wrap the two database calls inside a transaction so that other processes wouldn't "see" the newly inserted row until we read it back to retrieve the customer id. Once that was done we could commit the transaction and both the in-memory and in-storage versions would be the same.
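Here's a rough sketch of that transactional variant (illustrative only - insertCommand and selectIdCommand stand in for the two commands built in StoreNew(), and whether other readers are actually blocked depends on the isolation level):

Database db = DatabaseFactory.CreateDatabase(Settings.Default.DatabaseName);

using (DbConnection connection = db.CreateConnection())
{
    connection.Open();
    using (DbTransaction transaction = connection.BeginTransaction())
    {
        // perform the INSERT and the id read-back as one unit of work
        db.ExecuteNonQuery(insertCommand, transaction);

        using (IDataReader reader = db.ExecuteReader(selectIdCommand, transaction))
        {
            if (reader.Read())
            {
                _customerId = (int)reader["CustomerId"];
            }
        }

        transaction.Commit();
    }
}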


UpdateExisting() isn't worth showing in full, so I'll just describe it briefly: rather than constructing an INSERT statement, it constructs an UPDATE statement with a SET for each column.
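For the curious, here's a condensed sketch of what it might look like, following the same EntLib pattern as StoreNew() - the abbreviation is mine, and the real method sets every column:

private int UpdateExisting()
{
    string sqlCommand =
        "UPDATE Customer SET Title=@Title, FirstName=@FirstName, " +
        "LastName=@LastName " +   // ...remaining columns elided...
        "WHERE CustomerId=@CustomerId";

    Database db = DatabaseFactory.CreateDatabase(Settings.Default.DatabaseName);
    DbCommand dbCommand = db.GetSqlStringCommand(sqlCommand);

    db.AddInParameter(dbCommand, "Title", DbType.String, _title);
    db.AddInParameter(dbCommand, "FirstName", DbType.String, _firstName);
    db.AddInParameter(dbCommand, "LastName", DbType.String, _lastName);
    db.AddInParameter(dbCommand, "CustomerId", DbType.Int32, _customerId);

    if (db.ExecuteNonQuery(dbCommand) != 1)
    {
        throw new DataException("Failed to UPDATE existing Customer.");
    }

    return _customerId.Value;
}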


To recap, I've shown an implementation of the Active Record pattern that supports the following:



  • Exposes properties, corresponding to the columns in the underlying database table, that callers can manipulate.
  • Provides two constructors - an empty (default) constructor that initializes a new instance, and a second one that takes a primary key identifier and uses it to retrieve the instance values from a row in the underlying database table.
  • Provides Delete (not shown), Load, and Store methods to enable CRUD behavior.

The next installment will be a bit shorter - I'll cover writing unit tests for this data layer class so we can quickly flush out any problems. In addition, I'll briefly mention FxCop and what it can do for us. In the future we will revise this Active Record class as the needs of the other layers drive us to do more.


The code for the project can be downloaded from here.

Friday, January 2, 2009

Building a complex .NET application

To start the new year off with a bang, as well as to initiate my new blogging home with something significant, I'm going to post a series of articles on building a complex .NET application from scratch. The goal is to capture a set of techniques for an end-to-end solution. There are, of course, many different approaches and opinions on the proper way to build an application. I will attempt to provide a pragmatic solution along with some alternative techniques.
This post will serve as the main placeholder and table of contents - the links here will be updated with each post.
  1. Introduction
  2. Data Layer
    1. Part 1
    2. Part 2
    3. Part 3
  3. Entity Layer
    1. Part 1
    2. Part 2
  4. Service Layer
  5. UI Layer
  6. Smart Client
  7. Web Client
Here is a list of all the products we'll be using during this "adventure":
  • Visual Studio 2008 Express Edition - Free .NET IDE. If you've got the disk space, install the All-in-One DVD from the download page. At a minimum, you should install the Visual C#, Visual Web Developer, and SQL Server 2008 Express editions.
  • AdventureWorksLT - A sample database schema based upon the Adventure Works Cycles scenario. ** NOTE: To install this you need to enable Full Text search and FileStream support when installing SQL 2008 Express.
  • Enterprise Library 4.1 - A collection of reusable software components (application blocks)
  • NUnit 2.4.8 - .NET unit-testing framework
  • NAnt 0.85 - .NET build tool
  • FxCop 1.36 - A rules-based application for analyzing .NET assemblies and reporting information.