Monthly Archives: January 2012

Why you should use prefix increments instead of postfix

This is a pet peeve of mine, so bear with me. I really don’t like this code (curly brace language of your choice, but let’s say it’s C#):

for (int i = 0; i < 10; i++)
{
   // Do stuff
}

I specifically do not like i++. Unless you have a really good reason, you should be writing ++i in almost every case, even when it doesn’t matter, as in the example above, simply because it’s good style to do so.

The result in the loop above is exactly the same if it’s written like this:

for (int i = 0; i < 10; ++i)
{
   // Do stuff
}

So why does it matter? Because sooner or later it will bite you in the ass. You see, i++ has the semantics that the value of any expression that uses it is the value from before the increment. Which means that a copy must be made. Sure, the compiler probably optimizes this away, but what if this was an operator overloaded object?

Take a look at this admittedly stupid, but possible, object:

public class Overloaded
{
    private ExpensiveObjectToManipulate junk;

    public static Overloaded operator ++(Overloaded o)
    {
        o.junk.Increase();
        return o;
    }
}

You see, C# only provides one overload method for ++. In glorious C++ you get to overload it twice, in which case you also get the great opportunity to mess up the semantics of the operation, or change the return type, or do the other generally ill-advised things C++ allows. But I digress.

So, C#’s single overload serves both prefix and postfix increment. Prefix does what you think it does. Postfix does not. Postfix will make a copy of the object, assign it to whatever is supposed to be the target of the expression, and then perform the operation on the ExpensiveObjectToManipulate. Now think about what happens if you are using the Overloaded object in a loop, and ExpensiveObjectToManipulate is also expensive as hell to copy.

Do you trust your compiler to optimize it away?

Do you trust your vendor of stupid overloaded objects to not have different side effects from the copying?

This might not be a big problem in C#, where this sort of operator overloading thankfully is rare, but if you ever get to the wonderful world of C++ you’re going to see this. There is no reason on this earth to use the postfix operator, with its inherent complications, in as many cases as people do.

var i = new Overloaded();  // Using fun object for making a point.
foreach (var x in collectionOfX)
{
   // Do stuff with x
   x.DoStuff();
   // increase the counter
   i++; // Why, oh gods, why?!?
} 

There is no reason to use the postfix! Yet I’m willing to bet that many of you are doing it. The same goes for every for loop. So do the safe and optimized thing and use our friend the prefix operator every time you don’t mean to copy things.


Exception handling done the right way

I love exceptions, but I get a distinct feeling many people don’t when I see some horrific examples of their use. Especially from people who get forced into using them by Java’s wonderful checked exception feature without truly understanding the concept. Or from people so scared of them that they hardly dare catch one, like way too many of the C++ crowd. In these enlightened days, exceptions don’t cause the problems they used to, unless you are working with old C++ code which doesn’t use auto_ptr. If you are, abandon all hope and refactor hard.

I’m going to assume you’re writing C# here, but everything I say goes for Java as well. It’s even better there, since Java has checked exceptions. But without further ado, here’s my handy-dandy guide on how to do it right!

Why bother

Because it’s a graceful way of handling exceptional circumstances. Anything that might happen that is not considered a normal result of your method is an exceptional circumstance. Return values are not for exceptional circumstances. I assume you like the .NET framework or Java’s similar environment. They’re nice because of exceptions! To see the alternative, take a look at, for instance, straight-up COM in C++. It’s clunky as hell. Without exceptions, the only way to check for errors is to check return values. Whenever you do something you’ll invariably get the real return value as a pointer-to-a-pointer, and the method itself returns an HRESULT which you need to check. But of course, you won’t, since they almost never fail and you will forget. And you can’t read the code the way it’s supposed to be read, since the return values will be in the parameter list instead of in the actual return.

To compound the problem further, C# has out and ref specifiers for parameters, which in almost all cases are used for this sort of stuff, and which are almost always totally evil in my book for the reasons stated above. You should never ever do this sort of stuff:

string DoStuff(out BusinessObject bo) {
    // Doing stuff
    // ...
    bo = null;
    // Business plan bad, return error
    return ".COM boom failed";
}

The only way to find out what went wrong is to do string comparisons on the return value. Evil. Return value checking should be a thing of the past for non-exceptional circumstances.

How do I do it then?

Let’s start with how not to do it. Here’s a typical example of doing it wrong:

void DoStuff() 
{
	// stuff goes wrong
	throw new Exception("Shit is hitting fans");
}

NO! You don’t throw the Exception class, never ever. I wish they had made it abstract. This circumvents the catch statement’s intended use, makes it impossible to tell what’s going wrong, and makes babies cry. If you catch the Exception type, you will catch every exception. Whatever it was, you’ll have to figure out yourself. It might have been shit hitting fans, but it might as well have been a null reference. Maybe it was a division by zero. Who knows?

If shit is hitting fans, you throw a very specific exception relating preferably to that case and no other case.

public class ShitHitFanException : Exception 
{
    public ShitHitFanException(string message) : base(message) { }
}

void DoStuff() 
{
	// Danger Will Robinson!
	throw new ShitHitFanException("Scotty can’t change laws of physics");
}

And you catch it by catching the exception you need to handle, permitting the stuff you’re incapable of handling to bubble up to the surface. Maybe even to a crash, if that’s your cup of tea.

try
{ 
    DoStuff();
}
catch (ShitHitFanException e) 
{
    // Alert Will Robinson of danger.
}

This leads to my next point. You need to use base classes for your exceptions if they have any sort of common ground. Just routinely declaring exceptions as subclasses of Exception, like a big happy community of equals, is evil. You need to categorize your exceptions into types of exceptions. For instance, if you have some sort of method which handles files, you might be throwing FileNotFoundException, DirectoryAccessDeniedException and DiskFullException. These have common ground, and should be labeled as such by inheritance: you need an IOException base class. This makes your catching code able to intelligently decide whether it wants to handle exceptions specifically or more broadly, without resorting to acts of evil like catching Exception.

More importantly, it also enables you to do cool stuff like this:

void ShuffleImportantDocumentsFromTopDrawer(Document[] topDrawer) 
{
    try
    {
        foreach (var document in topDrawer) 
        {
            try
            {
                document.Shuffle();
            }
            catch (FileNotFoundException) 
            {
                // Dog probably ate it. Log a warning and continue
            }
        }
    }
    catch (IOException) 
    {
        // We end up here for BOTH DiskFullException and 
        // DirectoryAccessDeniedException
        CallTheItGuy();
    }
}

This is impossible to do gracefully if you have screwed up your inheritance chain or are catching Exception itself. If your exceptions have no relation to one another, you can’t catch two similar exceptions and handle them the same way without resorting to copy and paste, or putting the handling in a separate method, which is weird and ugly. This is oftentimes a problem with Java file IO. It throws exceptions like there’s no tomorrow and you’re required to catch them all. You probably want to handle some of them the same way. Catch the base class and you’re set.

But, I don’t want to handle it, you say. I want someone else to do it for me. First off the bat, let’s never ever do this again:

try 
{
    FaultyMethod();
}
catch (FaultyException e)
{
   // Whoops. I can't handle this stuff. Do some clean up and signal errors again
   throw new OtherException("There was a problem with faulty method, big surprise");
}

NO. There’s one huge glaring problem here. You are throwing away the cause of the exception! The hard and fast rule is: if you are throwing something from inside a catch, you need to supply the exception you throw with its underlying cause. Here is the corrected code:

   throw new OtherException("There was a problem with faulty method, big surprise", e);

See how we added the e to the constructor? This makes a world of difference! When you print a stack trace, you’ll be pointed to the correct line, instead of to the catch block, which says absolutely squat about the real nature of the problem.
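
Since everything here applies to Java as well, the same wrap-with-cause pattern can be sketched there (the exception names below are made up for the example); Throwable’s (message, cause) constructor is what preserves the chain:

```java
// Illustrative sketch of wrapping with a cause in Java.
class FaultyException extends Exception {
    FaultyException(String message) { super(message); }
}

class OtherException extends Exception {
    OtherException(String message, Throwable cause) { super(message, cause); }
}

public class CauseDemo {
    static void faultyMethod() throws FaultyException {
        throw new FaultyException("root cause lives here");
    }

    public static void main(String[] args) {
        try {
            try {
                faultyMethod();
            } catch (FaultyException e) {
                // Wrap, but keep the original as the cause.
                throw new OtherException("There was a problem with faulty method", e);
            }
        } catch (OtherException e) {
            // The printed stack trace now ends with
            // "Caused by: FaultyException: root cause lives here"
            System.out.println(e.getCause().getMessage());
        }
    }
}
```

Printing the outer exception’s stack trace automatically includes the full "Caused by" chain, pointing at the real offending line.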

Another point I’d like to make is that you shouldn’t throw another exception unless you have changed the meaning. But you can still catch stuff and log. As an example, say you have a SOAP service. Since you are diligently applying the principles in this article, you’re using exceptions to handle SOAP errors (they serialize fine, don’t worry about it). But you want a log on the server side as well of any exceptions sent to the client. Enter the second, underused, use of the throw keyword: the parameterless throw.

string MySoapyMethod(string param) 
{
    try 
    {
        // Do cool stuff that fails...
        BrokenCode();
    }
    catch (BrokenException e)
    {
        // Log this stuff so that we know about it
        Log.Error("Broken stuff in Soapy method", e);
        throw;
    }
}

This will rethrow the exception without altering it in any way. This is the other legal way of handling stuff you don’t want to or can’t handle. A final point: when you log using your logging framework, such as log4net, you typically get an option to pass an exception. Use it. If you are logging an exception, never log Log.Error(e.Message). Always log Log.Error("A real description of what went wrong", e). That way, you’ll still preserve the precious stack trace.


MVC Validations 4: Whole class business object attribute validations

Sometimes you need to validate an object where the column data itself isn’t enough to determine if the object is valid from a business perspective. However, this doesn’t mean you have to abandon attributes for validation!

As an example say we have this class:

public class BusinessObject
{
    public int PropA { get; set; }
    public int PropB { get; set; }
    public int PropC { get; set; }
}

Say our BusinessObject class here is only valid if the three properties have a sum of 100. Obviously we can’t set max and min values on the properties themselves to achieve this. What we need is an annotation attachable to the class itself. It works like this:

[AttributeUsage(AttributeTargets.Class)]
public class NumberSumAttribute : ValidationAttribute
{
    public override bool IsValid(object value)
    {
        var businessObject = value as BusinessObject;
        if (businessObject == null)
        {
            throw new ArgumentException("This annotation is not attachable to this object class");
        }
        return businessObject.PropA + businessObject.PropB + businessObject.PropC == 100;
    }
}

Attach it to the head of your class like this:

[NumberSumAttribute(ErrorMessage = "All fields must add up to 100.")]
public class BusinessObject
{
    public int PropA { get; set; }
    public int PropB { get; set; }
    public int PropC { get; set; }
}

That’s pretty much it. An obvious sticking point here is the class cast, but as far as I know you cannot restrict a validation attribute to only being applicable to certain types. It’s possible to achieve something similar by fiddling around with access restrictions and inheritance, if you really think you need this compile-time safety.

Oh, and since you won’t get the validation message on a field anymore, you’ll need to stick the class-generic validation messages somewhere. Add @Html.ValidationSummary(true) where you want these messages to appear, and you’re good to go!


Things I would like C# to have

In general C# is a great language, but being a long-time polyglot, there are two things I’m really missing which I think would make the language better.

Checked exceptions

I come from a long stint in Java, which has these, and I really don’t understand the reasons for not including them in C#. As far as I know this is the answer straight from the horse’s mouth. For those that don’t know what checked exceptions are: they are exceptions that you have to either catch or declare that you throw to the next guy. Like this in Java:

void foo(int x) throws SomeException
{
   if (x == 42) throw new SomeException();
}

Any code using foo has to either catch the exception or declare that it also throws it. This causes automatic documentation of what serious errors can happen in a function, since you are required to handle them. When I program in C# I just don’t know what bad things can happen in my code without looking at some documentation other than what IntelliSense + ReSharper gives me. I find this scary, and I constantly wonder whether I have holes in my application.

For this reason, I would really like to have this in C# in some shape or form. I don’t buy the reason of version compatibility or that people are sloppy and just go re-throwing everything. If you have that problem, your developers are at fault and you can’t design a language based on that.
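
The caller’s two options can be sketched like this (class and method names are illustrative, not from any real API):

```java
// Sketch of a checked exception and the caller's two choices.
class SomeException extends Exception {
    SomeException(String message) { super(message); }
}

public class CheckedDemo {
    static void foo(int x) throws SomeException {
        if (x == 42) throw new SomeException("the answer is forbidden");
    }

    // Option 1: handle it here.
    static boolean tryFoo(int x) {
        try {
            foo(x);
            return true;
        } catch (SomeException e) {
            return false;
        }
    }

    // Option 2: declare it and pass the buck; the compiler will not
    // let a caller of bar() silently ignore the possibility either.
    static void bar() throws SomeException {
        foo(42);
    }

    public static void main(String[] args) {
        System.out.println(tryFoo(1));   // true
        System.out.println(tryFoo(42));  // false
    }
}
```

Removing either the catch or the throws declaration is a compile error, which is exactly the documentation-by-force the post is talking about.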

Const the way it should be

Have you ever done C++? Have you ever done the const keyword the way it’s supposed to be used?

Const in C++ isn’t just a replacement for an evil #define. Oh no, it’s a whole new world once you understand it. Here’s a valid method signature in C++:

class Foo
{
    const Foo* const bar(const Foo* const foo) const;
};

In understandable terms, this is what the signature says: a method named bar takes a pointer which may not itself be changed and which may not be used to change whatever it points to, returns a similarly unchangeable pointer, and does all this while not altering the object in any way.

Wow.

The final const on the method is the really interesting one. It says that calling this method will not change the object at all. So if you have a const instance of the class, you can call all the methods that guarantee that the object won’t change, but you can never call the non-const ones.

The consts on the pointer are interesting too. The closest C# gets is readonly, which means I can’t change what the reference points to; that corresponds to the const after the star in C++ (the pointer itself is const). But the const before the type name means I can’t change the actual object at the end of my pointer.

Another awesome thing is that you can overload methods by just varying their const-ness.

Had I had this in C# I could write safer code with less chance of side effects. Also, it would not really impact existing code that much. I don’t get why C# doesn’t have it. Perhaps a constref keyword, to not break existing stuff? I get the feeling the real reason for not doing this is that most people doing C++ don’t understand how awesome const is. Whenever I do C++ I feel it’s almost my sacred duty to promote its proper use.

How about you, is there anything you miss? Please do leave a comment, I’d love to hear from you!


MVC Validation 3: Unobtrusive validation in Ajax-loaded partial views

It’s usually a pretty good idea not to reload the entire page, but if you use MVC’s now-standard unobtrusive validation you’ll find that the validation does not work on the partial you’ve just loaded.

Here’s how you solve this. Say you have a page which uses a script to load a view into a div using Ajax, something like this:

function loadStuff()
{
    $('#myDiv').load(window.location.pathname + "/Stuff");
}

If you do this and don’t do anything else, the unobtrusive validation won’t work at all. If you try to brute-force your way out of this by including the unobtrusive validation javascripts again on the partial view, the validation will kinda-sorta work. But really, it doesn’t. Instead, what you really want to do is reparse and rebind the validation. Including this at the bottom of the partial view you want to load will work:

<script type="text/javascript">
    $(document).ready(function () {
        $.validator.unobtrusive.parse("#myDiv");
    });
</script>

I’m not sure if it would be a better fit to place this in the Ajax loader method instead. I think I had some problems with that and went with putting this at the bottom of the page. You can reparse the entire document with no real ill effect, but parsing just the div you loaded is probably more efficient.

There’s nothing more to it really, but it’s a real gotcha that had me confused at first.


Finding memory corruption bugs

If you’ve never programmed in C/C++ or any of those languages that require manual memory management or pointers, chances are you won’t know the absolute horror that is finding these sorts of bugs. They are evil incarnate. One seemingly harmless pointer is left dangling after the object it pointed to has been deleted. But the pointer isn’t null. Oh no, it’s still pointing out into the void, ready to write bogus data into whatever’s there. Your application will crash, intermittently, and sometimes only in production code. Bonus points if it doesn’t happen when the debugger is attached.

These sorts of bugs are extremely hard to find, especially if the codebase is large/old. Recently I came across one of these, which had been causing havoc for quite some time, and set upon the task of fixing it.

There are a few ways to go about this. You could swap out the memory manager for something of your own design and hope you can find it. You could scale down the code while keeping the bug reproducible, in the hope of finding it by noticing when it goes away and deducing that it must have been in the code you just deleted.

Or you can try to understand it in more detail. A duo of tools has always done it for me: the memory view in Visual Studio and the Application Verifier.


The Application Verifier is a magic piece of software that somehow attaches into the executable when it runs and will cause breakpoints if you mangle memory. I don’t know how it does it, but it does. Download it here. It’s easy enough to use: you add the executable in Application Verifier, check that it should verify the application’s memory handling, and then run it, either normally or through a debugger. Application Verifier should stop the execution dead in its tracks, much closer to the actual cause than what you get if you just run it.

As an example, in my latest case, without the verifier the app crashed deep within Win32 when opening a window. With the verifier, it crashed in our own code, and much more reliably. Once you get it to this state, open the disassembly. Yes, the disassembly. You’ll have to go back one or two steps to get into your own code instead of the verifier-inserted code. There, try to see what memory access has offended the verifier. You probably have something like a DWORD ptr that is being dereferenced. Check that memory address in the memory view.

The memory view, besides its best feature of looking like the Matrix if you use green-on-black fonts, is quite useful. Use it to identify whether you are writing to uninitialized memory, double-freeing a pointer, or overrunning a buffer. Sometimes there are sentinel bytes, especially in the memory manager’s debug mode, which in Win32 famously uses values like 0xDEADBEEF and 0xBAADF00D for various states of the debug heap. Sometimes you can see strings, which gives you an indication as to where in the code you are.

Either way, you should be much closer to the problem now. In my case, it was an unhandled realloc return which in previous memory management implementations never moved the buffer, but now it did.

Application verifier probably won’t give you the exact line of offending code, but it’ll get you a whole lot closer!


MVC Validation 2: Validating for column uniqueness

A common problem I find is the case of a column needing to be unique, and how to handle this validation scenario. It’s easy enough to set a constraint in the backing SQL server, but handling this by catching an exception and then transferring it to the model state seems like a bad solution. What we want is a somewhat generic solution to this problem, ending up with something like this:

class MyEntity {
    public int Id { get; set; }

    [Unique(ErrorMessage="Name must be unique")]
    public string Name { get; set; }
}

I want this working out of the box without having to do anything at all in the validation page. This presents a few problems. Mainly the issue of how to get the database context to the UniqueAttribute class that is the source of our [Unique] attribute, and how to construct these queries.

For the first problem there are a few choices. You can set up a ValidationContext manually and add your context as a dictionary parameter in the items field of the ValidationContext constructor. This solution is problematic, as none of the built-in validation methods that get run by MVC and Entity Framework will provide that parameter. So that solution is out.

The second solution is to use the IServiceProvider interface, which is kinda-sorta an inversion of control container. However, I found this to be a bit inflexible, and without using IServiceProvider to wrap a real dependency injection framework, which according to some is an anti-pattern, we’d end up with a new DbContext for each instance.

So, using the solution presented earlier, this is what I came up with:

using System;
using System.ComponentModel.DataAnnotations;

[AttributeUsage(AttributeTargets.Property | AttributeTargets.Field, AllowMultiple = false)]
public class UniqueAttribute : ValidationAttribute
{
	public String[] AdditionalMembers { get; set; }

	protected override ValidationResult IsValid(object value, ValidationContext context)
	{
		var uniqueValidator = IoC.Get<IUniqueValidator>();
		if (uniqueValidator == null)
		{
			throw new ValidationException(
				"No IUniqueValidator found in UniqueAttribute. Cannot determine uniqueness");
		}

		// MemberName may not be initialized when this is called. It will be later on
		if (context.MemberName != null)
		{
			if (!uniqueValidator.CanPropertyBeUniquelyStored(context.ObjectInstance, context.MemberName, AdditionalMembers))
			{
				return new ValidationResult(ErrorMessage, new[] {context.MemberName});
			}
		}

		return ValidationResult.Success;
	}
}

A few comments on this: it uses the IoC wrapper class described in a previous post. If you’re using a dependency injection framework out of the box, substitute this with whatever kernel call you need to use. This in turn returns an interface that does the actual column uniqueness check. It would have been preferable to use constructor injection, but this won’t play nice with how .NET instantiates attribute classes.

I’ve also included an AdditionalMembers property to handle compound unique members. The member you attach the attribute to will get the validation message; the others will be validated along with the annotated member. Let’s have a look at the interface.

using System;

public interface IUniqueValidator
{
	bool CanPropertyBeUniquelyStored(object entity, string propName, string[] additionalMembers);
}

No big deal, pass the entity, the property name and any additional members you want validated along with the member. Now for the implementation of this interface.

using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Linq.Dynamic;

internal class UniqueValidator : IUniqueValidator
{
	private readonly IDbContext dbContext;

	public UniqueValidator(IDbContext dbContext)
	{
		this.dbContext = dbContext;
	}

	public bool CanPropertyBeUniquelyStored(object entity, string propName, string[] additionalMembers)
	{
		var entityType = entity.GetType();
		DbSet dbSet = dbContext.Set(entityType);

		var uniqueProperties = new List<string> {propName};
		if (additionalMembers != null)
		{
			uniqueProperties.AddRange(additionalMembers);
		}
		int propertyIndex = 0;
		string query = string.Join(" AND ", from a in uniqueProperties select GenerateQuery(a, entityType, propertyIndex++));
		object[] values = (from a in uniqueProperties select GetValue(a, entity, entityType)).ToArray();

		IQueryable list = dbSet.Where(query, values);

		var idProp = entityType.GetProperty("Id");
		var entityId = idProp.GetValue(entity, null);

		// Iterate through the list, if there is any entity which does not have the same ID 
		// as the one we are trying to insert, this property is not unique
		foreach (var listEntity in list)
		{
			var listEntityId = idProp.GetValue(listEntity, null);
			if (!listEntityId.Equals(entityId))
			{
				return false;
			}
		}

		return true;
	}

	private object GetValue(string propName, object entity, Type entityType)
	{
		var propertyInfo = entityType.GetProperty(propName);
		return propertyInfo.GetValue(entity, null);
	}

	private string GenerateQuery(string propName, Type entityType, int propertyIndex)
	{
		return propName + " == @" + propertyIndex;
	}
}

Again, we rely upon dependency injection to inject an IDbContext interface into the constructor. This is a simple interface extracted from the Entity Framework DbContext that provides us with at least the Set(Type entityType) method to get our hands on a DbSet. To construct the queries we use DynamicQuery, a smallish library that allows us to construct queries from strings. This particular code relies on the primary key being called Id and being an integer.

So, all in all, the solution is easy enough if you have a working dependency injection framework. And if you don’t, it should be easy enough to modify it to use the IServiceProvider interface instead, creating a real DbContext rather than getting a derived interface injected. No reason to write custom validation outside of attributes either way.


MVC validation part 1: Custom unobtrusive validation using attributes

The validation process you get from MVC is pretty damn sweet. Just annotate your class, or entity for that matter, and you get instant validation in both your views and in your repository. It’s easy enough to use: just annotate your properties with whatever validation you want, and the validation appears. For reference, this is how it’s done:

 

public class MyValidateableObject 
{
    [Required(ErrorMessage = "You have to input something")]
    [StringLength(50, ErrorMessage = "It's too long!")]
    public string MyString { get; set; }
}

To get this to work with MVC, you just use Html.EditorFor(…) as you normally would. Also include an Html.ValidationMessageFor(model => model.MyString) to get the validation message to the client. To enable client-side unobtrusive validation in your app, add this to your Web.config:

<appSettings>
  <add key="ClientValidationEnabled" value="true" />
  <add key="UnobtrusiveJavaScriptEnabled" value="true" />
</appSettings>

Also, do not forget to include the javascript files. You can typically do this in _Layout.cshtml, since your application will probably use these in most, if not all, of the views.

    <script src="@Url.Content("~/Scripts/jquery.validate.js")" type="text/javascript"></script>
    <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.js")" type="text/javascript"></script>

With that out of the way, how do you validate something that isn’t already provided by the validators? And how to make this work client side? Here’s how!

First, you need to create a new attribute. Here we are going to create a validation that will accept any string with a length evenly divisible by a specific number. The annotation is pretty straightforward:

[AttributeUsage(AttributeTargets.Property | AttributeTargets.Field, AllowMultiple = false)]
public class StringLengthDivideableAttribute : ValidationAttribute
{
    public int Divisor { get; set; }

    protected override ValidationResult IsValid(object v, ValidationContext validationContext)
    {
        if (v != null)
        {
            string value = (string)v;

            if (value.Length % Divisor != 0)
            {
                return new ValidationResult(ErrorMessage, new[] { validationContext.MemberName });
            }
        }

        return ValidationResult.Success;
    }
}

Note that the class name must end with Attribute. There are two methods you can override: one which does not have a ValidationContext available, and one which does. I suggest overriding the one with the validation context; it’s far more flexible than the other. With this annotation in place, you should be able to add this to your class:

public class MyValidateableObject 
{
    [Required(ErrorMessage = "You have to input something")]
    [StringLength(50, ErrorMessage = "It's too long!")]
    [StringLengthDivideable(Divisor=2, ErrorMessage="Length not divisible by two")]
    public string MyString { get; set; }
}

Doing this will cause server-side validation, and if you use it, Entity Framework validation validates too. But you won’t get the really nice unobtrusive validation. To get that, we need to provide three things: flag the attribute as client-validatable, provide a mapping for unobtrusive to use, and write our own javascript function. Let’s start with the attribute.

Change the attribute to also implement the IClientValidatable interface, found in System.Web.Mvc. This interface requires you to implement one method. Here’s what you need to do:

public IEnumerable<ModelClientValidationRule> GetClientValidationRules(ModelMetadata metadata, ControllerContext context)
{
    var rule = new ModelClientValidationRule
                   {
                       ErrorMessage = ErrorMessage,
                       ValidationType = "stringdivisible"
                   };
    rule.ValidationParameters.Add("divisor", Divisor);
    yield return rule;
}

The ValidationType string is the important bit. That’s the key for the unobtrusive binding, which is the next step. Equally important is the ValidationParameters collection, which will become accessible to your javascript validator. Speaking of that, after you’ve included your jquery.validate.unobtrusive.js, add this:

jQuery.validator.unobtrusive.adapters.add('stringdivisible', ['divisor'], function (options) {
    var params = {
        divisor: options.params.divisor
    };

    options.rules['stringdivisible'] = params;
    options.messages['stringdivisible'] = options.message;
});

This code will match the divisor passed as a parameter in the ValidationParameters to the params option, and enable the rule used for the actual validation. If you don’t have any parameters to your function, pass true to options.rules[‘stringdivisible’].

You also need to add the actual method that will perform the validation:

jQuery.validator.addMethod('stringdivisible', function (value, element, param) {
    return (value.length % param.divisor) === 0;
}, '');
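The check itself is trivial and easy to verify in isolation. Here is the same logic as a plain function (the helper name is mine, extracted purely for testing):

```javascript
// Same logic as the validator body above, extracted for testing
function isLengthDivisible(value, divisor) {
    return (value.length % divisor) === 0;
}

console.log(isLengthDivisible('abcd', 4));   // true
console.log(isLengthDivisible('abcde', 4));  // false
```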

There you have it. That should be the total amount of work needed to make things run on the client side. Give it a try; hopefully postback-only validation will soon be a thing of the past.

Tagged , ,

Dependency injection across assemblies

While working on a project I came across ground ripe for dependency injection. The team had done a pretty good job of extracting interfaces and introducing a data access layer and a rudimentary service layer, but had never gone the final step of actually introducing a dependency injection framework to handle the nitty-gritty details. Dependency injection without an inversion of control container will at some point result in someone instantiating the implementing classes. Or a lot of factories. That won't do.

I settled on trying Ninject, since it looked easy enough to set up using my now-standard method of using NuGet for everything. Doing the default thing here provides your MVC app with an App_Start folder containing a Ninject class that does the bootstrapping (courtesy of WebActivator, another nice library in itself). In the end, you'll have an IKernel on which to register your bindings using a pretty nice syntax:

kernel.Bind<IMyInterface>().To<MyImplementation>().InWhateverScopeYouWant();

Once bound, you'll get a constructor injection pattern out of the box, with each controller's dependencies automatically resolved. Nice!
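For example, a controller can simply declare its dependency in the constructor (a sketch; the controller and interface names are mine, matching the binding example above):

```csharp
using System.Web.Mvc;

public class HomeController : Controller
{
    private readonly IMyInterface myService;

    // Ninject resolves IMyInterface to MyImplementation
    // when MVC asks for a HomeController
    public HomeController(IMyInterface myService)
    {
        this.myService = myService;
    }
}
```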

Though I had pretty much free choice of dependency injection framework, I was given one key rule: there were to be no dependencies on the framework outside of the assembly that contained the actual bootstrapping. Everything, all interfaces and all implementing classes, was contained in separate assemblies.

Ninject gives you two solutions to this issue: either you reference every assembly from the bootstrapping project and go all out with IKernel.Bind() in your bootstrapper, or you use a set of modules. Modules in Ninject are small classes that let each assembly register its own bindings locally. Neither solution is good enough.

The first one, besides requiring a tremendous number of references to everything and everyone, forces the implementing classes to be public, since the bootstrapper needs to know their signatures for the bindings to work. I really want my interface-implementing classes to be internal, to avoid any "accidental" instantiation.

The second one is a lot better, having no such problems with the access modifier. However, it requires each of the containing libraries to include a reference to Ninject. It's not a huge library by any means, but a dependency is a dependency, and if Ninject turned out not to suit our needs it would be pretty hard to replace.

So, we need a solution which can work across modules, and doesn’t carry a reference to the actual framework. This is what I came up with after some thought.

    using System;
    using System.Linq;
    using System.Reflection;

    public static class IoC
    {
        public enum Scope
        {
            Transient, Request, Singleton
        }

        private static IDependencyResolver container;

        public static T Get<T>()
        {
            return container.Get<T>();
        }

        public static void Initialize(IDependencyResolver c, bool autoLoadModules = true)
        {
            container = c;

            if (autoLoadModules)
            {
                // Find all assemblies and all classes implementing the IDependencyLocatorModule
                // for each of these, register the dependencies
                Assembly[] assemblies = AppDomain.CurrentDomain.GetAssemblies();
                foreach (var assembly in assemblies)
                {
                    var q = from t in assembly.GetTypes()
                            where t.IsClass && !t.IsAbstract
                                  && typeof(IDependencyLocatorModule).IsAssignableFrom(t)
                            select t;
                    q.ToList().ForEach(t => LoadModule(t, container));
                }
            }
        }

        public static void LoadModule(Type type, IDependencyResolver resolver)
        {
            ConstructorInfo constructorInfo = type.GetConstructor(Type.EmptyTypes);
            if (constructorInfo == null)
            {
                // Log an error
            }
            else
            {
                var locatorModule = constructorInfo.Invoke(new object[] {}) as IDependencyLocatorModule;
                if (locatorModule == null)
                {
                    // Could not instantiate the locator module, log an error and continue
                }
                else
                {
                    locatorModule.RegisterDependencies(resolver);
                }
            }
        }
    }

    public interface IDependencyResolver
    {
        void RegisterDependency(Type interfaceType, Type concreteType, IoC.Scope scope = IoC.Scope.Transient);

        T Get<T>();
    }

    public interface IDependencyLocatorModule
    {
        void RegisterDependencies(IDependencyResolver resolver);
    }

This solution does quite a few things. The static IoC class holds the wrapped IKernel, since the rest of the code cannot, and must not, use it directly. So far this container has seen very little use, but it's there. The real magic happens in the Initialize method. Since we are in the startup project, all the needed assemblies will, in a sense, have been loaded by the time the bootstrapping runs. We place all this dependency injection plumbing in its own separate library, and since it's all good old safe C# anyway, everyone can access it.

So, every library that needs dependency injection needs to provide just one class. Place it anywhere you like in the project, make it public, implement IDependencyLocatorModule, and you get correct bindings without either a reference to any specific framework or compromised access restrictions.
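Such a class might look something like this (a sketch; the repository interface and class names are mine, for illustration):

```csharp
using System;

// A hypothetical interface and its internal implementation,
// both living inside the owning library.
public interface IOrderRepository { }
internal class OrderRepository : IOrderRepository { }

// Public, so the bootstrapper's reflection scan can instantiate it;
// the implementing classes themselves stay internal.
public class DataAccessDependencyModule : IDependencyLocatorModule
{
    public void RegisterDependencies(IDependencyResolver resolver)
    {
        resolver.RegisterDependency(typeof(IOrderRepository),
                                    typeof(OrderRepository),
                                    IoC.Scope.Request);
    }
}
```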

The RegisterDependencies() method of IDependencyLocatorModule is really just a sample. Ninject offers a lot more functionality than that, but since no one uses it yet I like to invoke the keep-it-simple argument and leave that until it's actually needed. Wrapping Ninject in an IDependencyResolver was dead simple, and it should not be a problem no matter what framework you actually choose.
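For reference, the Ninject-backed resolver can be as simple as this (a sketch; the class name and the scope mapping are my assumptions, and InRequestScope() comes from Ninject.Web.Common):

```csharp
using System;
using Ninject;
using Ninject.Web.Common;

public class NinjectDependencyResolver : IDependencyResolver
{
    private readonly IKernel kernel;

    public NinjectDependencyResolver(IKernel kernel)
    {
        this.kernel = kernel;
    }

    public void RegisterDependency(Type interfaceType, Type concreteType,
                                   IoC.Scope scope = IoC.Scope.Transient)
    {
        // Translate our framework-neutral scope enum to Ninject's scopes
        var binding = kernel.Bind(interfaceType).To(concreteType);
        switch (scope)
        {
            case IoC.Scope.Singleton: binding.InSingletonScope(); break;
            case IoC.Scope.Request:   binding.InRequestScope();   break;
            default:                  binding.InTransientScope(); break;
        }
    }

    public T Get<T>()
    {
        return kernel.Get<T>();
    }
}
```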

Tagged , ,

Naming..

Whenever I start to write something, the same question invariably pops up: "What do you want to call this/yourself?" I've pretty much resigned myself to giving up on cool aliases and just going by my real name, so the "yourself" bit should be covered. It has even gone so far that my large Nord sneak archer currently roaming Skyrim is named Per as well.

But the other problem remains, and starting this blog it once again reared its ugly head. Then, in a flash of inspiration, Binary Sculpting came to me. It's actually a pretty old quote from a co-worker of mine, back in the days when I did games development.

You see, back in those days, around 2003, games programming still had that old-school feeling to it. I suspect much of it is gone now in the age of robust engines and third-party technology such as SpeedTree and Havok. In those days we still wrote our engines ourselves, almost starting afresh with each new project, and with programmers who had been in the game since the early nineties there was a prevalent culture of admiring the age-old art of squeezing the last bit of performance out of the machine. Hell, we even wrote our own level editor! But this apparently was nothing compared to the glory days of old.

The project that had just wrapped up when I started working there back in '02 was a rally game. It also had a level editor, handwritten by "some guy at Codemasters". Codemasters must have been an aptly named company, because this level editor had been hand-written entirely in x86 assembly. Mind you, this wasn't some low-tech 2D editor. Oh no, this was a full-blown 3D editor with mesh editing and all that good stuff. When telling me this, our audio programmer got a dreamy look in his eyes and uttered the magic words: "That guy, at Codemasters, he was a true binary sculptor".

While binary sculpting on that scale almost never happens these days, I'd like to take the expression as my own and try to carry it into the '10s.

Tagged