Wednesday, January 15, 2014

Interface level validations of DataObjects

Purpose

I was recently working on a project where I wanted to create the most granular interfaces possible and then group them into usable objects. My intent was to pass these objects around to shared utilities and services that would focus on the interfaces and thus be reusable by intent instead of by specific implementation. For example, we would create an interface each for phone number, email, and street address. These interfaces could be used on a customer, vendor, or employee object to ensure a standard implementation of the fields. Then we could build our communications services to work off the interfaces for communication types, and thus be generic to the entity being contacted. By doing this, I would be able to build singularly focused services that deal with small tasks on data objects without regard to their specific implementation beyond the interface.
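As a sketch of the idea (these interface and class names are hypothetical, not from the project), the granular interfaces and a service that depends on only one of them might look like this:

```csharp
// Hypothetical granular interfaces; names are illustrative only.
public interface IPhone   { string PhoneNumber   { get; set; } }
public interface IEmail   { string EmailAddress  { get; set; } }
public interface IAddress { string StreetAddress { get; set; } }

// Any entity composes the interfaces it needs.
public class Customer : IPhone, IEmail, IAddress
{
    public string PhoneNumber   { get; set; }
    public string EmailAddress  { get; set; }
    public string StreetAddress { get; set; }
}

// A communication service depends only on the interface,
// not on the entity behind it, so it works for customers,
// vendors, or employees alike.
public class EmailService
{
    public string BuildGreeting(IEmail contact)
    {
        return "Sending mail to " + contact.EmailAddress;
    }
}
```

A Vendor or Employee class implementing IEmail could be passed to the same EmailService without any change to the service.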

Validation

Being a big fan of data contracts with the ability to run self-aware validations (i.e., validations that consider only the scope of the contract itself, without any relationships), I found myself needing a way to have all objects validate against the requirements of the smaller interfaces within the CRUD services of the manifested object. I also wanted the ability to dynamically add validation routines and have them picked up by the calling services without coupling the two together.

Solution

By creating a custom attribute and applying it to methods in the interface, I can reflect over the methods of any object and trigger it to self-validate using any and all methods added to the object by the implemented interfaces. Then we simply call a data contract extension method to invoke all validations. Here are the examples.
Custom Method Attribute

[AttributeUsage(AttributeTargets.Method, Inherited = false)]
public class InterfaceValidation : Attribute
{
    //...
}
Data Contract Extension

public static class DataContractExtensions
{
    public static T CallInterfaceValidations<T>(this T obj)
    {
        // Invoke every public method decorated with [InterfaceValidation].
        foreach (var method in obj.GetType().GetMethods())
        {
            var attributes = method.GetCustomAttributes(typeof(InterfaceValidation), true);
            if (attributes.Length > 0)
            {
                method.Invoke(obj, null);
            }
        }
        return obj;
    }
}
Assign the attribute to a method

[InterfaceValidation]
void ValidateSomething();
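Pulling the pieces together, here is a self-contained sketch (the IEmailContact interface and Customer class are hypothetical). One caveat: .NET reflection over a class does not report attributes declared only on the interface's methods, so this sketch repeats the attribute on the implementation; alternatively, the extension method could walk the type's interface maps.

```csharp
using System;

[AttributeUsage(AttributeTargets.Method, Inherited = false)]
public class InterfaceValidation : Attribute { }

public static class DataContractExtensions
{
    public static T CallInterfaceValidations<T>(this T obj)
    {
        // Invoke every public method decorated with [InterfaceValidation].
        foreach (var method in obj.GetType().GetMethods())
            if (method.GetCustomAttributes(typeof(InterfaceValidation), true).Length > 0)
                method.Invoke(obj, null);
        return obj;
    }
}

// Hypothetical interface and implementation; names are illustrative only.
public interface IEmailContact
{
    string Email { get; set; }

    [InterfaceValidation]
    void ValidateEmail();
}

public class Customer : IEmailContact
{
    public string Email { get; set; }

    // The attribute is repeated here because reflection over the class
    // does not see attributes declared only on the interface method.
    [InterfaceValidation]
    public void ValidateEmail()
    {
        if (string.IsNullOrEmpty(Email) || !Email.Contains("@"))
            throw new InvalidOperationException("Email is invalid.");
    }
}
```

Calling new Customer { Email = "a@b.com" }.CallInterfaceValidations() succeeds, while an invalid address throws; note that MethodInfo.Invoke surfaces the failure wrapped in a TargetInvocationException.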




Friday, December 27, 2013

Inversion of Control Pattern

Dependency Inversion Principle

The DIP states that high-level classes should not depend on low-level classes; both should depend on abstractions.  Likewise, abstractions should not depend on details; details should depend on abstractions.  The DIP is about reversing the direction of dependencies so that lower-level components depend only on interfaces owned by the higher-level components.  This is a method for moving to a more loosely coupled architecture: depend on a standard interface shared by objects, not on their details.  This one might be best illustrated by example.

In the following example, the BadCar class is tightly coupled to the actual implementation of the BadMotor class.  This means that these two objects are married, and changes to one directly impact the other.

public class BadMotor
{
    public Boolean Start()
    {
        Console.Write("Starting");
        return true;
    }
}
 
//tightly coupled to details.
public class BadCar
{
    public BadMotor Motor {get;set;}
 
    public Boolean Start(BadMotor badMotor)
    {
        Motor = badMotor;
        return Motor.Start();
    }
}
 
 
While this does function, it creates a dependency between two objects at their implementation level.   That creates hardships for maintenance and scalability long term.   The proper way to build this relationship is to invert the dependencies onto interfaces, ensuring no object has knowledge of or visibility into the implementation of any other object. Consider the following examples:
public interface IEngine
{
    bool Start();
}
 
public interface IGreenEngine : IEngine
{
    bool IsCharged ();
}
 
public class FourCylEngine : IEngine
{
    public bool Start()
    {
        Console.WriteLine("4Cyl Starting");
        return true;
    }
}
 
public class V8Engine : IEngine
{
    public bool Start()
    {
        Console.WriteLine("V8 Starting");
        return true;
    }
}
 
public class HybridEngine : IGreenEngine
{
    public bool Start()
    {
        if (IsCharged())
            Console.WriteLine("Hybrid Starting");
        return true;
    }
    public bool IsCharged()
    {
        Console.WriteLine("Hybrid is charged");
        return true;
    }
}
 
public class Car
{
    public Car(IEngine engine)
    {
        Engine = engine;
    }

    public IEngine Engine { get; set; }

    public Boolean Start()
    {
        return Engine.Start();
    }
}
 
I built this interface dependency and inheritance to ensure that objects are loosely coupled and share only an interface.   This allows actual implementations to come and go, or even be recognized dynamically, without forcing dependent objects to update their implementation.   Here is an example of hot swapping and scaling with the previously defined objects and interfaces.
class Program
{
      
    static void Main(string[] args)
    {
        //  Cars are only dependent upon an Engine interface
        Car BigThing = new Car(new V8Engine());
        BigThing.Start();
        //  Cars are only dependent upon an Engine interface
        Car SmallThing = new Car(new FourCylEngine());
        SmallThing.Start();
        // Since we have an interface dependency, it is easy to hot swap.
        BigThing.Engine = new HybridEngine();
        BigThing.Start();
    }
}

The output is as follows:

V8 Starting
4Cyl Starting
Hybrid is charged
Hybrid Starting
As you can see, this allows for maintainability and long-term scalability by ensuring that the objects stay out of each other's business. By adhering to the spirit of this principle, as well as the previous SOLID principles, you can keep your code base healthy and easy to maintain when requirement changes come.

Tuesday, December 10, 2013

OData , Atom and AtomPub

The Open Data Protocol (OData) is a protocol that standardizes the exposure and consumption of data. In times when data is being exposed at high rates and consumers connect to more and more data endpoints, it is important for clients to access those endpoints in a common way. OData builds on standards like HTTP, Atom, and JSON to provide RESTful access to data endpoints. Data is exposed as entities, where each entity can be treated as an HTTP resource, which makes it subject to CRUD (create, read, update, delete) and PATCH operations.

Atom is a way to expose feeds, much the same way RSS does. Atom by itself allows only feed exposure. If you want to publish data, AtomPub (Atom Publishing Protocol) provides this ability. AtomPub uses the HTTP verbs GET, POST, PUT, and DELETE to enable data publishing.
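As an illustration (the service path and entity names here are hypothetical), the verb-to-operation mapping looks along these lines:

```
GET    /service.svc/Customers      -- read the Customers feed
GET    /service.svc/Customers(1)   -- read a single entry
POST   /service.svc/Customers      -- create a new entry
PUT    /service.svc/Customers(1)   -- replace an entry
PATCH  /service.svc/Customers(1)   -- update selected properties
DELETE /service.svc/Customers(1)   -- delete an entry
```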

Obviously, this is not an implementation to be used in every situation.  But it is an interesting way of making data available as feeds with real interaction.

Monday, November 18, 2013

Team Foundation Services is now Visual Studio Online

Microsoft officially launched Visual Studio Online (Formerly Team Foundation Services) last week.  The announcement came with news of many added features and benefits.  Here is an overview of the announcement:

Pricing

The good news is that if you have MSDN, you are most likely still not going to be charged for day-to-day usage. The bad news is that if you have product owner or product manager roles on your team and want them to use VSO for backlog interaction, etc., they will now need to pay a membership fee.  The MSDN license that includes that access for free is the Ultimate level, which is the most expensive MSDN package there is.   This is going to cost at least $45 monthly for these employees (depending, of course, upon their role in your organization).  You can find the breakdown of pricing levels here.  The other financial consideration is that you will be charged for build time on any build and deploy jobs.  This is basically going to cost a couple of pennies per minute every time you do a CI-style build. I did some investigating and found that this does include publish time. In our world, that is roughly 65% of the time the build job is running.  So, you will pay the going rate while your build is being uploaded and deployed to the Azure instances for which you are already paying.

Link VSO with Azure

Link your VSO account to your Azure account and have a single portal for managing them both.  Very handy.  Details here.

Monaco

Monaco is a new development service specifically designed for building and maintaining Windows Azure Websites. With Monaco, developers have a lightweight, free companion to the Visual Studio desktop IDE that is accessible from any device on any platform. Monaco is a rich, browser-based, code-focused development environment optimized for the Windows Azure platform, making it easy to start building and maintaining applications for the cloud.  Here are some cool videos on Channel9.

Application Insights


With this “360 degree view” of your application, Application Insights can quickly detect availability and performance problems, alert you, pinpoint their root cause, and connect you to rich diagnostic experiences in Visual Studio for diagnosis and repair. It also supports continuous, data-driven improvement of an application. For example, it highlights which features are most and least used, where users get “stuck” in an application, where and why exceptions are occurring, which client platforms are being used with which OS versions, and where performance optimizations will make the biggest impact on compute costs.  You can sign up for the free preview here.  Following are some sample screenshots:


Dashboard
Visual Studio Integration

Environment Metrics

Monday, November 11, 2013

Quick method to optimize your foreign key searching

A lot of people do not realize that creating a foreign key does not also create an index.  This is by design, and actually a good thing.  Over-indexing a table slows data modification, as every insert or update forces the affected indexes to be maintained.  Over-indexing will also slow down select statements, as the query optimizer must work through evaluating all of the indexes to pick the one it thinks is best suited to your current search.

When working on building targeted foreign key indices to speed up a search, I came up with this code block to auto generate the script for me.

SELECT 'IF NOT EXISTS (SELECT * FROM sys.indexes WHERE name = N''IX_' + FK.TABLE_NAME + '_' + CU.COLUMN_NAME + ''') 
BEGIN 
    CREATE INDEX IX_' + FK.TABLE_NAME + '_' + CU.COLUMN_NAME + '  ON ' + FK.TABLE_NAME + '(' + CU.COLUMN_NAME + ');
END
GO
print N''create IX_' + FK.TABLE_NAME + '_' + CU.COLUMN_NAME + ' done''; 
RAISERROR (N'' --------------------'', 10,1) WITH NOWAIT',
       FK.TABLE_NAME AS K_Table,
       CU.COLUMN_NAME AS FK_Column,
       PK.TABLE_NAME AS PK_Table,
       PT.COLUMN_NAME AS PK_Column,
       C.CONSTRAINT_NAME AS Constraint_Name
FROM   INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS AS C
       INNER JOIN
       INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS FK
       ON C.CONSTRAINT_NAME = FK.CONSTRAINT_NAME
       INNER JOIN
       INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS PK
       ON C.UNIQUE_CONSTRAINT_NAME = PK.CONSTRAINT_NAME
       INNER JOIN
       INFORMATION_SCHEMA.KEY_COLUMN_USAGE AS CU
       ON C.CONSTRAINT_NAME = CU.CONSTRAINT_NAME
       INNER JOIN
       (SELECT i1.TABLE_NAME,
               i2.COLUMN_NAME
        FROM   INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS i1
               INNER JOIN
               INFORMATION_SCHEMA.KEY_COLUMN_USAGE AS i2
               ON i1.CONSTRAINT_NAME = i2.CONSTRAINT_NAME
        WHERE  i1.CONSTRAINT_TYPE = 'PRIMARY KEY') AS PT
       ON PT.TABLE_NAME = PK.TABLE_NAME
WHERE  PT.Column_name = 'ID'
       AND PK.Table_Name = '{COMMONLY QUERIED TABLE}'
       AND FK.TABLE_NAME <> PK.TABLE_NAME;  
In this statement, replace {COMMONLY QUERIED TABLE} with the name of the table whose primary key is frequently used in queries. The script generates CREATE INDEX statements only where an index does not already exist. You could easily modify it to do a drop and re-create as well.
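For example, for a hypothetical Orders table with a CustomerID foreign key pointing at Customers(ID), the first column of the result set would contain a script along these lines:

```sql
IF NOT EXISTS (SELECT * FROM sys.indexes WHERE name = N'IX_Orders_CustomerID')
BEGIN
    CREATE INDEX IX_Orders_CustomerID ON Orders(CustomerID);
END
GO
print N'create IX_Orders_CustomerID done';
RAISERROR (N' --------------------', 10,1) WITH NOWAIT
```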

Friday, October 18, 2013

ASP.NET Identity for 4.5

ASP.NET membership has gone through many changes over the years. From simple membership to SQL providers to OWIN, the needs of developers are constantly changing. .NET 4.5 has brought another change to the identity model. We have to let go of the assumption that users will log in by entering credentials unique to our application. Increasingly, users expect to leverage a single online identity to drive all of their web-based experiences (e.g., Facebook, Twitter, etc.). Developers should also want users to be able to log in with these social identities so that our applications can provide a rich and integrated experience to the users' online life.

Unit testing code should be a core concern for application developers. MVC is a great pattern and platform for those who want to unit test their code.  Now, you should easily be able to do that with the membership system. ASP.NET Identity was developed with the following goals (Verbatim from Microsoft):
  • One ASP.NET Identity system 
    • ASP.NET Identity can be used with all of the ASP.NET frameworks, such as ASP.NET MVC, Web Forms, Web Pages, Web API, and SignalR. 
    • ASP.NET Identity can be used when you are building web, phone, store, or hybrid applications.
  •  Ease of plugging in profile data about the user 
    • You have control over the schema of user and profile information. For example, you can easily enable the system to store birth dates entered by users when they register an account in your application. 
  •  Persistence control 
    • By default, the ASP.NET Identity system stores all the user information in a database. ASP.NET Identity uses Entity Framework Code First to implement all of its persistence mechanism. 
    • Since you control the database schema, common tasks such as changing table names or changing the data type of primary keys is simple to do. 
    • It's easy to plug in different storage mechanisms such as SharePoint, Windows Azure Storage Table Service, NoSQL databases, etc., without having to throw System.NotImplementedException exceptions. 
  • Unit testability 
    • ASP.NET Identity makes the web application more unit testable. You can write unit tests for the parts of your application that use ASP.NET Identity. 
  • Role provider 
    •  There is a role provider which lets you restrict access to parts of your application by roles. You can easily create roles such as “Admin” and add users to roles. 
  • Claims Based 
    • ASP.NET Identity supports claims-based authentication, where the user’s identity is represented as a set of claims. Claims allow developers to be a lot more expressive in describing a user’s identity than roles allow. Whereas role membership is just a boolean (member or non-member), a claim can include rich information about the user’s identity and membership. 
  • Social Login Providers 
    • You can easily add social log-ins such as Microsoft Account, Facebook, Twitter, Google, and others to your application, and store the user-specific data in your application. 
  •  Windows Azure Active Directory 
    • You can also add log-in functionality using Windows Azure Active Directory, and store the user-specific data in your application. For more information, see Organizational Accounts in Creating ASP.NET Web Projects in Visual Studio 2013 
  • OWIN Integration 
    • ASP.NET authentication is now based on OWIN middleware that can be used on any OWIN-based host. ASP.NET Identity does not have any dependency on System.Web. It is a fully compliant OWIN framework and can be used in any OWIN hosted application.
    • ASP.NET Identity uses OWIN Authentication for log-in/log-out of users in the web site. This means that instead of using FormsAuthentication to generate the cookie, the application uses OWIN CookieAuthentication to do that. 
  • NuGet package 
    • ASP.NET Identity is redistributed as a NuGet package which is installed in the ASP.NET MVC, Web Forms and Web API templates that ship with Visual Studio 2013. You can download this NuGet package from the NuGet gallery. 
    • Releasing ASP.NET Identity as a NuGet package makes it easier for the ASP.NET team to iterate on new features and bug fixes, and deliver these to developers in an agile manner.
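As a small illustration of the claims-based point above, here is a sketch using the System.Security.Claims types that ship in .NET 4.5 (standalone; it does not touch the Identity NuGet packages, and the user values are made up):

```csharp
using System;
using System.Security.Claims;

// A claim is a typed statement about the user, not just a role flag.
var claims = new[]
{
    new Claim(ClaimTypes.Name, "jdoe"),
    new Claim(ClaimTypes.Email, "jdoe@example.com"),
    new Claim(ClaimTypes.Role, "Admin"),
    new Claim(ClaimTypes.DateOfBirth, "1980-01-01")
};

var identity = new ClaimsIdentity(claims, "ApplicationCookie");
var principal = new ClaimsPrincipal(identity);

// Role checks still work, because IsInRole looks for role-typed claims...
Console.WriteLine(principal.IsInRole("Admin"));                 // True
// ...but any other claim can be read back as well.
Console.WriteLine(principal.FindFirst(ClaimTypes.Email).Value); // jdoe@example.com
```

This is what makes claims more expressive than boolean role membership: the same principal carries the role, the email, and the birth date.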