Archive for February, 2012

Dynamic Data Access Layer, Part 2

Wednesday, February 29th, 2012

In Part 1 of Dynamic Data Access Layer, we created a class that was able to map between any Class and a DataRecord assuming the fields matched the properties.  In Part 2, we’re going to retrieve the data from the database and use the Mapper to populate the class.

We’ll start, of course, by declaring the class itself. Interestingly, when we actually use this class, we’ll not only inherit from it, but T will need to be the interface that the inheriting class implements.

In order for this class to do its work, it needs two things: a Microsoft.Practices.EnterpriseLibrary.Data.Database object and a Mapper class like the one from Part 1 of this series. If the user of the class passes in a Mapper, we’ll use it; otherwise one will be created the first time the property is accessed (next code section). The user of this class can supply the Database, the Mapper, both, or neither, which could be helpful in a Dependency Injection scenario. If they are not supplied, defaults will be provided.


public class SQLReaderWriterBase<T> where T : new()
{
    Database db;

    #region Constructors
    // Note: chaining to this() creates a default Database first;
    // the one passed in then replaces it.
    public SQLReaderWriterBase(Database database, IMapper<T> mapper) : this()
    {
        this.db = database;
        Mapper = mapper;
    }

    public SQLReaderWriterBase(IMapper<T> mapper) : this()
    {
        Mapper = mapper;
    }

    public SQLReaderWriterBase(Database database) : this()
    {
        this.db = database;
    }

    public SQLReaderWriterBase()
    {
        if (db == null)
        {
            db = DatabaseFactory.CreateDatabase();
        }
    }
    #endregion

        

Next we create the properties for our class. While one of the main points of this exercise is that the user doesn’t need to know anything about the database, there may be situations where it’s necessary to override defaults. Therefore, we allow the user to provide the name of the stored procedure to call through the GetCommandText property. We also provide a default, in the format spClassNameGet, which should handle 99.9% of cases. Just as the Mapper could be passed into the constructor, it can also be set through the Mapper property. However, if it’s not set through either method, we provide a default implementation using the SQLMapper we created in the first article.


#region Properties
    string getCommandText = "";
    public string GetCommandText
    {
        get
        {
            if (getCommandText == "")
            {
                // Default convention: "sp" + class name + "Get"
                getCommandText = "sp" + typeof(T).Name + "Get";
            }
            return getCommandText;
        }
        set
        {
            getCommandText = value;
        }
    }

    IMapper<T> mapper;
    public IMapper<T> Mapper
    {
        get
        {
            if (mapper == null)
            {
                mapper = new SQLMapper<T>();
            }
            return mapper;
        }
        set
        {
            mapper = value;
        }
    }
    #endregion
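To make the default naming concrete, here is a small sketch using the same expression the GetCommandText property relies on. The Customer class and the DefaultCommandText helper are illustrative, not part of the article’s code:

```csharp
using System;

public class Customer { public int ID { get; set; } }

public static class CommandTextDemo
{
    // Same expression the GetCommandText property uses for its default.
    public static string DefaultCommandText<T>()
    {
        return "sp" + typeof(T).Name + "Get";
    }

    public static void Main()
    {
        Console.WriteLine(DefaultCommandText<Customer>()); // spCustomerGet
    }
}
```

So for a Customer class the base class would look for a stored procedure family named spCustomerGet, unless the user overrides GetCommandText.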

Now we’re starting to get down to the real fun of this class: actually retrieving the data and populating the object. But first let’s talk about the allowed methods:

  1. Using an identity field as the key, which should only require a single integer to be passed in
  2. An unknown number of parameters passed in comma-separated form, i.e. SQLReaderWriterBase.Get(parameter1, parameter2, …);

So let’s start with the simplest case and create a Get that takes a single integer representing an identity field, which identifies the row to retrieve and return an object for. Rather than do any real work, we’ll just pass the necessary info on to another version of Get to handle. If you notice that it passes in "ByID", that’s because our actual implementation requires the first parameter to be the string attached to the end of the name of the stored procedure to run. While the majority of the time data will be retrieved ByID, there will be other cases where retrieval is done on other criteria; there may be ByName or ByEmail or ByNameEmail. This design allows for unlimited variations, but puts the burden of knowledge of the database back onto the calling class.


public ObservableCollection<T> Get(int ID)
{
    return Get("ByID", ID);
}

So the next version of Get handles an unknown number of parameters being passed in. Fortunately, C# gives us the “params” keyword, which takes the comma-separated values and turns them into an array: SQLReaderWriterBase.Get(param1, param2, param3) becomes an array of objects that we can work with. We then convert this array of objects to a LinkedList and pass handling off to another version of Get.


public ObservableCollection<T> Get(params object[] parameterValues)
{
    LinkedList<object> listParameterValues = new LinkedList<object>(parameterValues);

    return Get(listParameterValues);
}
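The params-to-LinkedList plumbing can be seen in isolation with a small sketch (Collect is a hypothetical stand-in for the overload above, and the argument values are illustrative):

```csharp
using System;
using System.Collections.Generic;

public static class ParamsDemo
{
    // Mirrors the article's pattern: the params keyword collects the
    // comma-separated arguments into an array, which we wrap in a LinkedList.
    public static LinkedList<object> Collect(params object[] parameterValues)
    {
        return new LinkedList<object>(parameterValues);
    }

    public static void Main()
    {
        LinkedList<object> values = Collect("ByEmail", "jane@example.com");
        Console.WriteLine(values.First.Value);      // the suffix, "ByEmail"
        Console.WriteLine(values.First.Next.Value); // the first real parameter
    }
}
```

Note that the first node is always the stored-procedure suffix; everything after it is a parameter value in stored-procedure order.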

So we still haven’t actually done anything; the prior two versions of Get simply provide easy ways for users of the class to call into it. Now we’re going to do the real work of getting the data from the database and populating the class(es).

The very first thing we’ll do is create our return collection and then use the first parameter to modify the command text we’ll use to discover the parameters. If you remember from above, since we potentially have many ways of retrieving data for this class, we need some way to differentiate between them, so the first item in the parameters linked list must be a suffix for the command text that clarifies which stored procedure to use.


public virtual ObservableCollection<T> Get(LinkedList<object> parameterValues)
{
    ObservableCollection<T> collection = new ObservableCollection<T>();

    LinkedListNode<object> currentParameter = parameterValues.First;
    SqlCommand sqlCommand = new SqlCommand(this.GetCommandText + currentParameter.Value.ToString());
    sqlCommand.CommandType = CommandType.StoredProcedure;

    db.DiscoverParameters(sqlCommand);

Now that we have the parameters for the given stored procedure, we’ll loop through all SQL parameters and set their values based on the passed-in parameters, assuming they are in the same order.


    foreach (SqlParameter parameter in sqlCommand.Parameters)
    {
        // Skip the return-value parameter that DiscoverParameters adds.
        if (parameter.ParameterName != "@RETURN_VALUE")
        {
            currentParameter = currentParameter.Next;
            if (currentParameter != null)
            {
                parameter.Value = currentParameter.Value;
            }
        }
    }

Now that we have the parameters set, we’ll simply call ExecuteReader to get the data and then allow the Mapper class to do its job, returning the collection of mapped objects.


    using (IDataReader reader = db.ExecuteReader(sqlCommand))
    {
        try
        {
            collection = Mapper.MapAll(reader);
            return collection;
        }
        catch
        {
            // todo: consider handling the exception here instead of re-throwing,
            // if graceful recovery can be accomplished
            throw;
        }
        finally
        {
            reader.Close();
        }
    }
}
}

Just like that, we have a base class that can be inherited from and used to populate a class with data from a SQL Server database without writing any data access code.

In the next article, we’ll extend this class by adding in the Writer portion of SQLReaderWriterBase and allow for updates and inserts.

For Versus ForEach Versus Generics With Lambda

Tuesday, February 21st, 2012

A fairly common question in programming with C#.NET is what is the best way to iterate through a collection of objects. I personally lean towards ForEach for three reasons:

  1. Syntax looks cleaner, easier to follow
  2. Removes possibility of common errors, such as out of bounds errors
  3. Makes it clear that the process is occurring on all elements in the collection

In Part 1 of Dynamic Data Access Layer we had the following code:


foreach (PropertyInfo property in properties)
{
    if (property.Name == fieldName)
    {
        property.SetValue(instance, record[counter], null);
        break;
    }
}

Using a Lambda Expression, we can simplify this to two lines of code:


PropertyInfo property = properties.Single(pi => pi.Name == fieldName);
property.SetValue(instance, record[counter], null);

Of course, the foreach version will simply do nothing if no property matching the fieldName is found, while the Single extension method will throw an exception if there is no match. This forces us to actually choose what we wish to do in the case of there being a field which does not have a matching property.
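If you’d rather keep the foreach version’s silent behavior while still using a lambda, SingleOrDefault is the usual middle ground. A sketch, with a hypothetical Sample class standing in for the mapped type:

```csharp
using System;
using System.Linq;
using System.Reflection;

public class Sample
{
    public string Name { get; set; }
    public int Value { get; set; }
}

public static class LookupDemo
{
    // Returns null rather than throwing when no property matches,
    // mimicking the foreach version's do-nothing behavior.
    public static PropertyInfo FindProperty(string fieldName)
    {
        PropertyInfo[] properties = typeof(Sample).GetProperties();
        return properties.SingleOrDefault(pi => pi.Name == fieldName);
    }

    public static void Main()
    {
        Console.WriteLine(LookupDemo.FindProperty("Name").Name);       // Name
        Console.WriteLine(LookupDemo.FindProperty("Missing") == null); // True
    }
}
```

Either way, the choice between Single and SingleOrDefault makes the not-found behavior an explicit decision rather than an accident.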

But the interesting part of this is the Lambda expression which we’re using:


pi => pi.Name == fieldName

The easiest way to get used to this syntax is to take a look at what it would be without Lambda Expressions:


bool IsMatch(PropertyInfo pi, string fieldName)
{
    return pi.Name == fieldName;
}

So in that one line of code, we’re asking for each and every PropertyInfo object within the collection of properties to be passed into the function “IsMatch”, and to be given back only the PropertyInfo object which meets the criteria. The Lambda Expression simply gives us a very succinct way to do this in one line of code. Although it may seem like we’re sacrificing readability by doing this, once you get used to the new notation it’s very easy to follow. Check out Lambdas, Generics and Delegates. Oh My!!! for more information on Generics and Lambdas.

As far as performance goes, I have yet to see anyone prove definitively that there is any significant difference between For, ForEach and IEnumerable extensions with Lambda Expressions. Of course, if you are one of the few working on a section of code with significant performance requirements, I would strongly recommend setting up timing trials as close to real-world conditions as possible to see whether there is any significant difference for your workload.
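A minimal timing-trial harness along those lines might look like the following sketch; the million-integer sum is a placeholder workload, so substitute the code you actually care about:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

public static class TimingDemo
{
    // Runs the action once under a Stopwatch and reports elapsed time.
    public static long TimeIt(string label, Action action)
    {
        Stopwatch watch = Stopwatch.StartNew();
        action();
        watch.Stop();
        Console.WriteLine(label + ": " + watch.ElapsedMilliseconds + " ms");
        return watch.ElapsedMilliseconds;
    }

    public static void Main()
    {
        List<int> numbers = Enumerable.Range(0, 1000000).ToList();
        long total = 0;

        TimeIt("for    ", () => { for (int i = 0; i < numbers.Count; i++) total += numbers[i]; });
        TimeIt("foreach", () => { foreach (int n in numbers) total += n; });
        TimeIt("lambda ", () => { total = numbers.Sum(n => (long)n); });
    }
}
```

For a fair trial you would also want multiple repetitions and a warm-up pass, since the first iteration pays JIT costs.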

Dynamic Data Access Layer, Part 1

Monday, February 20th, 2012

In the article Dynamically Generate A Class From An Interface, I discussed extending what was built by having the dynamically generated class inherit off a base class that will handle not only reading data from the database to populate the class, but also updating the database.

But in order to create this class to be inherited from, we’ll need a class that is able to map between properties of a class and fields in a DataRecord.  So let’s start there:


public class Mapper<T> where T : new()
{

We start simply with the Mapper class, which can accept a generic T.  We then move on to creating a Map function which takes a single DataRecord, creates an instance of T and returns it.


protected T Map(IDataRecord record)
{
    T instance = new T();

The first thing we’ll need to do is use Reflection to get information about the properties T has.  We are simply able to ask T’s Type what properties it contains:


PropertyInfo[] properties = typeof(T).GetProperties();

Now that we have a collection of PropertyInfo objects, one for each property, we’ll loop through the Fields in the DataRecord.  Unfortunately, IDataRecord does not contain an easy way to loop through using foreach, so we’ll resort to a for loop and counter.


    for (int counter = 0; counter < record.FieldCount; counter++)
    {

With each field, we’ll make sure there is actually a value before bothering to map it.  In some cases it is necessary to note the fact that a field is DBNull, and you’ll need to modify this to handle those situations.


        if (record[counter] != null && record[counter] != System.DBNull.Value)
        {
            string fieldName = record.GetName(counter);

Next we’ll get the property which has a name that matches the field we have currently.


            PropertyInfo property = typeof(T).GetProperty(fieldName);

Once we have a match between the property and field, we ask the property to set its value to the value in the current field of the current record:


            if (property != null)
            {
                property.SetValue(instance, record[counter], null);
            }
        }
    }

    return instance;
}

That really is all we have to do.  We simply find the property and the field that share a name and ask the property to set its value based on the field.  This maps a single record to a single instance of a class.

Next we need to create a method that will take a DataReader, loop through all records within it and map each individual record to an object instance.  Because this method uses the Map function to take care of the heavy lifting for each individual record, it’s actually very simple and self-explanatory:


public ObservableCollection<T> MapAll(IDataReader reader)
{
    ObservableCollection<T> collection = new ObservableCollection<T>();

    while (reader.Read())
    {
        try
        {
            collection.Add(Map(reader));
        }
        catch
        {
            // todo: consider handling the exception here instead of re-throwing,
            // if graceful recovery can be accomplished
            throw;
        }
    }

    return collection;
}
}

While this class does the job, it has one glaring weakness: if you had 100 records being mapped into 100 objects, it would retrieve the PropertyInfo collection 100 times using Reflection.  This is worth caching.
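One way to address that weakness is to fetch the PropertyInfo array once per closed generic type and reuse it; a static field in a generic class is created separately for each T. A sketch (CachedMapper and Customer are illustrative names, not part of the article’s code):

```csharp
using System;
using System.Reflection;

public class CachedMapper<T> where T : new()
{
    // Resolved once per T the first time the type is used;
    // every subsequent Map call would reuse the same array.
    static readonly PropertyInfo[] properties = typeof(T).GetProperties();

    public static int PropertyCount
    {
        get { return properties.Length; }
    }
}

public class Customer
{
    public int ID { get; set; }
    public string Name { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        Console.WriteLine(CachedMapper<Customer>.PropertyCount); // 2
    }
}
```

With this pattern, mapping 100 records costs one GetProperties call instead of 100.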

So this is the first step in building a Dynamic Data Layer, but it doesn’t do much on its own.  In Part 2 of this series (coming soon), we’ll dive into the next piece, which retrieves the data from a database and uses this class to map it to the class.

Lambdas, Generics and Delegates. Oh My!!!

Wednesday, February 15th, 2012

Lambdas, Generics and Delegates have become some of my favorite programming tools (along with reflection).  It’s unfortunate that their use seems to violate one of the underlying principles I have based my career on; a colleague summed it up nicely with “the belief that a solution reaches a better place when it is as simple as possible”.

So the challenge becomes how to bring these techniques out of the domain of the senior programmer and make them accessible to a junior audience.  I’ll start by saying that I don’t believe these are particularly complicated topics; it’s just that much of what is written about them revolves around the theoretical rather than the practical implementation viewpoint.  So I’m going to attempt to bring them back down to earth.

Generics
Microsoft states it as “Generics allow you to define type-safe classes without compromising type safety, performance, or productivity.”  I know this makes it as clear as mud.

Let’s take a different approach at defining Generics. Let’s say you wanted to create a class that could be applied to multiple “kinds” of data. You don’t care whether the programmer instantiating your class wants to use it with integers, strings or another class you’ve never heard of. The old way would be to simply use the object type, but a few problems would arise:

  1. Type Safety – no guarantee all objects would be the same type
  2. Performance – Each object would need to be “boxed/unboxed” when referenced
  3. Productivity – In order to solve 1 & 2, you would create type specific versions of the class, impacting productivity and creating maintenance issues.
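The first two problems are easy to demonstrate by contrasting the pre-generics ArrayList with List&lt;int&gt;; a minimal sketch:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

public static class Demo
{
    public static void Main()
    {
        // Old way: everything is object, so this compiles...
        ArrayList mixed = new ArrayList();
        mixed.Add(1);
        mixed.Add("two");          // no type safety: mixed types allowed
        int first = (int)mixed[0]; // unboxing cast on every read

        // Generic way: the compiler enforces the element type.
        List<int> numbers = new List<int>();
        numbers.Add(1);
        // numbers.Add("two");     // would not compile
        int firstTyped = numbers[0]; // no cast, no boxing

        Console.WriteLine(first + firstTyped); // 2
    }
}
```

The third problem is what you hit when you try to fix the first two without generics: a hand-written IntList, StringList, CustomerList and so on.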

Generics make this unnecessary. Here is the outline for one of my favorite classes that I’ve written using generics:


public class SQLMapperBase<T>
{
    protected T Map(IDataRecord record)

    public ObservableCollection<T> MapAll(IDataReader reader)
}

I’m sure that taking a look at the definition for this class is very confusing at first.  I’ve left out the actual implementation code as it would just confuse things further and is the subject of a future article I’m working on.  The point of this class is simply to take any object and map its properties to the fields in a record from a database.  Forgetting about how the class actually accomplishes this mapping (reflection), the beauty of this is that I as the developer of this class do not need to know anything about the classes it will be used with.  The person using the class simply needs to make sure that the properties of the class “passed in” as T when the class is instantiated match the fields in the DataReader to be used.

Still not quite following?  It should be easier to follow when we look at the actual usage of an instance of this class:


SQLMapperBase<Customer> customerMapper = new SQLMapperBase<Customer>();
SqlDataReader customerReader = GetCustomersByCriteria("LastPurchaseDate > 1/1/2012");
ObservableCollection<Customer> customers = customerMapper.MapAll(customerReader);

So we’re instantiating a SQLMapperBase object and telling it that it’s going to work with objects of class Customer.  We then get the customer records out of the database and finally ask the instantiated customerMapper to map each record to a Customer object and return all of these Customer objects as an ObservableCollection.

Keep in mind that the real power of this is that SQLMapperBase doesn’t actually know anything about class Customer, nor did SQLMapperBase’s developer need to know anything about it.

Now we can do the exact same thing for the other data we need to extract from the database and turn into a collection of type safe objects.  You see we know for a fact that all objects within the collection are in fact a given type that we specified.  If we specify the class as Customer, all our objects will be of type Customer.  If we were to follow the same pattern of code and specify Product as our class type then in fact all objects within our collection will be of type Product.

While this may not seem important at first glance, it affects how the compiler handles not only this code but the code that comes after, allowing it to produce more efficient IL than it could if these were simply of type object.  Not to mention that while the SQLMapperBase class may seem difficult to follow at first, the code which uses it is relatively easy to follow once you get used to the syntax and see the <T> as just a different kind of parameter that defines the type of object the class is working with.

Lambdas

Lambdas are simply a shorthand way of writing functions.  This fits in very nicely with the viewpoint of the traditional C-based programmer.  If you’re not aware, there are/were competitions to see who could write the most functionality into a single line of code.  I’m sure, based on my opening statement on creating code that is easy to maintain, you can see my issue with this viewpoint.

So the question would be: why do I have such an interest in Lambdas if I prefer code that is easy to maintain and readily apparent as to its purpose?  It comes down to another guiding principle that says the fewer lines of code you have, the less there is to maintain and therefore the lower the chance of errors.  Now it should be obvious that these are competing principles and neither is 100% correct in every case; they are more like guidelines.  The other main reason is their interaction with Delegates and Generics, which we’ll delve further into shortly.

You’ll often see a simple example given to explain Lambdas:


x => x + 1;
int AddOne(int x)
{
    return x + 1;
}

While line 1 is functionally equivalent to lines 2 through 5, it’s the typical usage inside of other expressions that tends to cause developers to freeze up.  Which leads us to the topic of delegates.

Delegates

We’ll refer back to Microsoft again for a definition: “A delegate is a type that defines a method signature. When you instantiate a delegate, you can associate its instance with any method with a compatible signature. You can invoke (or call) the method through the delegate instance.”

Taking our example from above and modifying slightly:


int AddOne(int x)
{
    return x + 1;
}

delegate int del(int x);

bool AreTheyEquivalent(int x)
{
    int y = AddOne(x);

    del delegateAddOne = a => a + 1;
    int z = delegateAddOne(x);

    return y == z;
}

And the answer would be yes they are functionally equivalent.  Outside of our function we create a delegate that can accept an integer and returns an integer.  Within our function we actually create an implementation that does something and then call that implementation to do the work.

This is fairly easy to follow, but doesn’t demonstrate the true beauty of Lambdas and Delegates.  It’s when you combine them with Generics that they really shine.

Let’s think back to creating a class that doesn’t actually know anything about the objects it’s working with.  We saw how to do this using Generics, but what if you needed to provide functionality that could find a given instance within the collection of objects without knowing anything about them?  You could try to create multiple searches that used the first integer property or the first string property for this purpose, but that wouldn’t really be usable in every possible situation.

What if the developer instantiating your class could pass you a delegate that did the comparison work?  After all, they know at that moment what the type of the object is and how to find the one they are looking for.
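That idea — the caller supplies the comparison as a delegate — might be sketched like this (TypedCollection and Customer are hypothetical names chosen for illustration):

```csharp
using System;
using System.Collections.Generic;

// A container that knows nothing about T, yet can search,
// because the caller hands it the comparison as a delegate.
public class TypedCollection<T>
{
    List<T> items = new List<T>();

    public void Add(T item) { items.Add(item); }

    // Func<T, bool> is the built-in delegate type for a predicate.
    public T Find(Func<T, bool> isMatch)
    {
        foreach (T item in items)
        {
            if (isMatch(item)) { return item; }
        }
        return default(T);
    }
}

public class Customer
{
    public string Name { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        TypedCollection<Customer> customers = new TypedCollection<Customer>();
        customers.Add(new Customer { Name = "Ada" });
        customers.Add(new Customer { Name = "Grace" });

        // The caller knows Customer; the collection does not.
        Customer found = customers.Find(c => c.Name == "Grace");
        Console.WriteLine(found.Name); // Grace
    }
}
```

The lambda at the call site is where all three features meet: a delegate parameter, on a generic class, written in lambda shorthand.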

Hopefully I’ve inspired you to look into these topics further and see how they can help improve your code and allow more opportunities for code reuse.  I also hope they don’t seem quite as daunting as they did when you first started reading.

Dynamically Generate A Class From An Interface Part 3 Unit Tests

Monday, February 6th, 2012

In Part 1 and Part 2 of Dynamically Generate A Class From An Interface, we wrote code to be able to compose objects for use in Dependency Injection as part of an overall dynamic system.  We are now actually able to take any Interface definition consisting only of properties and not only turn it into source code, but compile and instantiate an object to return.

In this post, we’ll discuss unit testing the methods created previously using MSTest built into Visual Studio 2010. I hope that it’s easy to see the value in unit testing; how would it look if I simply posted the code for this series without any testing whatsoever? While I like to consider myself an expert developer, even I make mistakes occasionally while coding. Using MSTest is very simple and will automatically generate stubs for each method within your class; simply select New Test on the Test Menu and then use the Unit Test Wizard.

First let’s create an Interface to test with; it doesn’t need to be extensive. In fact, because we’ll need to hand-code what we expect to have returned from Generate, we want to keep it simple. The point is to unit test as many cases as possible as quickly as possible, not to spend longer writing the test than we did writing the code we’re testing.


public interface ITestInterface
{
    string Name { get; set; }
    int Value { get; set; }
    string Read { get; }
    string Write { set; }
}

Next we’ll test the Generate method, which (if you remember from Part 1) actually creates the source code for the class implementing the Interface. We’ll implement this test first, since in order to test the object-creation method we first need to be able to successfully generate source code from an interface. Alternatively, we could separate the code that builds the hand-created expected source code into its own method, which the Generate test could call for comparison and the Create test could use as its input source code.



/// <summary>
///A test for Generate
///</summary>
public void GenerateTestHelper<T>()
    where T : class
{
    ClassCreator target = new ClassCreator();

    ClassSourceCode actual = target.Generate<ITestInterface>();

    // Checking the source code is a little more complicated
    StringBuilder sourceCode = new StringBuilder();

    sourceCode.Append("using System;");
    sourceCode.Append("using OTSD.Test.Common;");

    sourceCode.Append("namespace OTSD.Test.Common");
    sourceCode.Append("{");

    sourceCode.Append("public class TestInterface : OTSD.Test.Common.ITestInterface");
    sourceCode.Append("{");
    sourceCode.Append("public System.String Name { get; set; }");
    sourceCode.Append("public System.Int32 Value { get; set; }");
    sourceCode.Append("private System.String read;");
    sourceCode.Append("public System.String Read");
    sourceCode.Append("{");
    sourceCode.Append("get");
    sourceCode.Append("{");
    sourceCode.Append(" return read;");
    sourceCode.Append("}");
    sourceCode.Append("}");
    sourceCode.Append("private System.String write;");
    sourceCode.Append("public System.String Write");
    sourceCode.Append("{");
    sourceCode.Append("set");
    sourceCode.Append("{");
    sourceCode.Append("write = value;");
    sourceCode.Append("}");
    sourceCode.Append("}");
    sourceCode.Append("}");
    sourceCode.Append("}");

    ClassSourceCode expected = new ClassSourceCode("TestInterface", sourceCode.ToString().TrimEnd());

Up to this point, all we’ve done is hand-code the source code we expect to get back, for comparison purposes. Next we actually get to the test conditions. It’s fairly straightforward, as we only have two properties within ClassSourceCode to verify. We first make sure we received the desired class name, "TestInterface", and then make sure that the source code also matches. In running the test, I found that the only mismatch between the expected and actual source code turned out to be the carriage-return/line-feed characters. As these don’t actually impact compilation or indicate a problem with the results, I’ve chosen to take the quick and dirty route of removing "\r\n" from the actual source code for the assert.



    Assert.AreEqual(expected.ClassName, actual.ClassName);
    Assert.AreEqual(expected.SourceCode, actual.SourceCode.Replace("\r\n", ""));
}

[TestMethod()]
public void GenerateTest()
{
    GenerateTestHelper<GenericParameterHelper>();
}

Next, we’ll populate the tests for Create. As mentioned previously, you can either call the Generate method which we tested individually from within this test or alternatively, you can separate out the code that hand creates the expected source code for the interface into a separate method which can be called in both tests.

The interesting part of this test is that we have no real way to check to make sure that the object created is correct. Instead we settle for making sure that the object has successfully implemented the passed in Interface. For all intents and purposes this should do the same thing as it tells us that the source code compiled and we were able to instantiate an object which "is" the Interface. This guarantees that the object returned can be used anywhere that an Interface of T is expected, but does not ensure that the object doesn’t have extra "junk". In other words, we rely on our test of the Generate method and the check against hand created source code to ensure that the source code going in has only what we expect. The fact that these two unit tests pass provide us with reasonable certainty that the object returned from Create has only the expected properties.


/// <summary>
///A test for Create
///</summary>
public void CreateTestHelper<T>()
{
    ClassCreator target = new ClassCreator();
    ClassSourceCode classSourceCode = target.Generate<ITestInterface>();
    T actual = target.Create<T>(classSourceCode);
    Assert.IsTrue(actual is T);
}

[TestMethod()]
public void CreateTest()
{
    CreateTestHelper<ITestInterface>();
}

Well we now have the ability to take any Interface with only properties and turn it into an instantiated object that we can work with, but it doesn’t really do anything. It would be far more interesting if the object were able to populate itself from a database when provided only with the key or even return a collection of populated objects from a database based on criteria.

This actually isn’t that difficult. While we could simply modify the source code we create to include Create, Read, Update and Delete (CRUD) functionality, it would be far more interesting to create a base class that has this capability using reflection to link the properties and database operations. Then the only change to the generated source code for the class is to inherit off of this base class. This sounds like a fun challenge which we’ll undertake in the coming weeks. Check back often for an article on a dynamic CRUD base class.

Dynamically Generate A Class From An Interface Part 2

Thursday, February 2nd, 2012

In Part 1 of Dynamically Generate A Class From An Interface, we started work on being able to compose objects for use in Dependency Injection as part of an overall dynamic system.  We discussed being able to take any Interface definition consisting only of properties and turn it into source code.  In this post, we’ll discuss turning the source code created by the method in Part 1 into a compiled class with an instantiated object being returned.

So let’s get started on a method which compiles the source code, instantiates an object of the given type and returns it to the caller.

Like the prior method, we’ll be using a generic type parameter to pass the Interface we’re working with and that we expect the returned object to implement, along with the ClassSourceCode object that contains the source code we intend to compile.  Of course, we’ll also declare a return object to hold the instantiated class which implements the passed in Interface.



public T Create<T>(ClassSourceCode classSourceCode)
{
    T returnObject;


The CSharpCodeProvider class provided by the .Net framework makes it easy to compile code.  This is not a class that we can instantiate directly with new.  Instead we use the Factory method provided by the CodeDomProvider to create and return an instance of the class.  In this case, we’re specifying V4.0 of the .Net framework.  Keep in mind that most articles on the topic show an outdated method for compiling code which was the preferred method in older versions of the framework but is now obsolete.  The methods we’re using are the currently supported way of doing it at least as of Framework 4.0 and the writing of this article.



CSharpCodeProvider csharpCodeProvider =
    (CSharpCodeProvider)CodeDomProvider.CreateProvider(
        "C#", new Dictionary<string, string> { { "CompilerVersion", "v4.0" } });


Next we create a CompilerParameters object and set a few of the parameters we’ll be using.  As we’re going to compile and immediately instantiate an object, we’re not looking for an executable or even a dll to be created.  We’re simply going to have it live in memory only.

We’ll also add the assembly that contains the passed in Interface as a referenced assembly.  If you take a look back in Part 1, you’ll see that the class we’re creating implements the Interface.  Our assumption is that this interface exists within the calling assembly; if this is not the case, you’ll need to modify the code to set a reference some other way.



CompilerParameters compilerParameters = new CompilerParameters();
compilerParameters.GenerateInMemory = true;
compilerParameters.GenerateExecutable = false;
compilerParameters.ReferencedAssemblies.Add(typeof(T).Assembly.Location);


Compiling the source code is just a simple matter of asking the code provider to do so with the parameters and source code.  The interesting piece is what happens if errors are returned.  To keep things simple, we’re throwing an exception containing only the very first error.  If you’re using this extensively, you may want to build a little more functionality around this piece, possibly even separating compilation from instantiation to allow for a UI to work with code, if you’re using this for more than just the scenario outlined in these articles.



CompilerResults compilerResults = csharpCodeProvider.CompileAssemblyFromSource(compilerParameters, classSourceCode.SourceCode);

if (compilerResults.Errors.HasErrors)
{
    throw new Exception(compilerResults.Errors[0].ErrorText);
}
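If you do want more than the first error, one possible sketch (a helper I’m inventing here, not part of the article’s code) collects every non-warning error into a single message before throwing:

```csharp
using System;
using System.CodeDom.Compiler;
using System.Text;

public static class CompilerErrorHelper
{
    // Sketch: aggregate every compiler error into one exception message
    // instead of surfacing only the first one.
    public static void ThrowIfErrors(CompilerResults results)
    {
        if (!results.Errors.HasErrors) { return; }

        StringBuilder message = new StringBuilder();
        foreach (CompilerError error in results.Errors)
        {
            if (!error.IsWarning)
            {
                message.AppendLine("Line " + error.Line + ": " + error.ErrorText);
            }
        }
        throw new Exception(message.ToString());
    }
}
```

Seeing all the errors at once is especially handy if you later build a UI around the compilation step.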


All we have left to do now is to use the compiler results to actually create an instance of the object and return it.  You now have not only a compiled class, but an actual object that you can manipulate in the calling code.



returnObject = (T)compilerResults.CompiledAssembly.CreateInstance(typeof(T).Namespace + "." + classSourceCode.ClassName);

return returnObject;
}

While this is a great start and took us through an interesting use of reflection and runtime code compilation, we’re still a little short of being able to use this for much.  This object has properties but doesn’t have the capability of doing anything real.  In a future article, we’ll discuss modifying the generated class to inherit from a base class that automatically gives the object basic database CRUD (create, read, update, delete) operations using reflection, with no additional coding necessary.  But first, before we get too far, we need to make certain that the code we’ve written works.

Dynamically Generate A Class From An Interface Part 3 Unit Tests

In the next article, we’ll create a unit test for each method using MSTest and make sure that we’re getting the results we want.