Onion Architecture

Recently I have been writing some applications using the Onion Architecture, one of which is here: GitHub.  I guess I've been using this pattern for a long time; I just didn't know the name for it until I read Jeffrey Palermo's blog, which is a very good read.  One thing you have to consider, though, is whether the complexity this architecture enforces is worth it for your particular application.  And by complexity I don't mean the usual way of referencing each layer (Presentation –> Business –> Data), which most developers will already be familiar with.  I wouldn't recommend using this on a small-scale application where the traditional layered architecture will suffice; I just don't think the added complexity will pay off.  However, once you get the hang of how the different sections are separated and wired together at runtime, it is really not that bad.

Making the Core layer completely independent and only having it deal with interfaces promotes loose coupling and makes for a highly testable system, which I really like.  Also, putting the infrastructure concerns on the outside, rather than making infrastructure the layer that all other layers depend on, makes complete sense.  If you think about it, the Core application layer shouldn't care or have any inkling about where the data comes from, whether it is cached or not, how the logging is done, etc.  It should only concern itself with the domain/business logic it has to deal with and nothing more.

Testable Object pattern

Unit tests should be given the same level of respect and care that we put into writing production code; if a test is not readable and maintainable then you might as well remove it.  I've always struggled with refactoring test code that contains mocked objects that I want to invoke some method on within my tests.  That is, until I came across Brad Wilson's testable object pattern, which really gave me that ah-ha moment.  It is simple to create and easily readable, which is the perfect combination.  However, you still have to do the dirty dance of finding the fine balance between how much refactoring you want to do and keeping the tests readable.

Assume that you’ve a bunch of test that looks something like this:

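The original code screenshot is gone, so here is a sketch of a test in that shape, assuming Moq and MSTest, with two hypothetical dependencies (IPromoService and ILogger) since the real names aren't recoverable:

[TestMethod]
public void Index_ReturnsView()
{
    // Arrange: build every mock by hand and push them through the constructor
    var promoService = new Mock<IPromoService>();
    var logger = new Mock<ILogger>();
    promoService.Setup(s => s.GetActivePromos()).Returns(new List<Promo>());

    var controller = new PromoController(promoService.Object, logger.Object);

    // Act
    var result = controller.Index();

    // Assert
    Assert.IsNotNull(result);
}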

Creating the mock objects and inserting them into the PromoController gets pretty redundant if you have to do it more than a few times.  The way you refactor it is by creating a new class that inherits from the class under test (in my case PromoController) and exposes the two mocked objects as its fields.

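The testable object itself, sketched with the same hypothetical dependency names:

public class TestPromoController : PromoController
{
    public readonly Mock<IPromoService> PromoService;
    public readonly Mock<ILogger> Logger;

    public TestPromoController()
        : this(new Mock<IPromoService>(), new Mock<ILogger>())
    {
    }

    private TestPromoController(Mock<IPromoService> promoService, Mock<ILogger> logger)
        : base(promoService.Object, logger.Object)
    {
        PromoService = promoService;
        Logger = logger;
    }
}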

You could make the two mocked fields auto-properties if you like, but when dealing with tests I try to avoid that because it adds more noise to the code without giving me any benefit.  Now the previous test can be refactored using this new TestPromoController class while still having access to its mocked objects.

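And the refactored version of the test sketched above:

[TestMethod]
public void Index_ReturnsView()
{
    // Arrange: one line replaces all of the mock plumbing
    var controller = new TestPromoController();
    controller.PromoService.Setup(s => s.GetActivePromos()).Returns(new List<Promo>());

    // Act
    var result = controller.Index();

    // Assert
    Assert.IsNotNull(result);
}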

Happy Summer!

Using Ninject with constructor parameters

Before I even begin to write about this topic, let me clarify one thing.  This is an example of the infamous Service Locator pattern, which some call an anti-pattern.  There are tons of articles on the internet, and I suggest you read some of them before you implement the following sample code.  However, there are occasions where using the Service Locator pattern might be your only option, for example when dealing with some badly architected legacy code.

One of my requirements was that I needed to pass in constructor arguments that could only be determined at runtime.  None of the search results were helpful: half of them just talked about using the "WithConstructorArgument" method of Ninject when binding classes, which only works for arguments that can be defined at compile time, and the other half preached about why I shouldn't use the Service Locator pattern.  Not very useful.

You can find the complete running code on GitHub: https://github.com/tenzinkabsang/IoC_ServiceLocator

After a bunch of fiddling around with the Ninject code and going through the documentation, I was finally able to come up with a solution.  Basically, Ninject does not expose any of its binding information – more specifically the bound types – which I need in order to get at their constructors and constructor parameters.  So the way I extract the type information is by chaining the "WithMetadata" method onto the "Bind" call for any class that requires runtime arguments.

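The module screenshot didn't survive either; here is a minimal sketch of the idea, assuming Ninject 3.x and made-up types (ILogger/FileLogger, IReport/Report):

using Ninject.Modules;

public class ServiceModule : NinjectModule
{
    public override void Load()
    {
        // Ordinary binding, resolved with a plain kernel.Get<T>()
        Bind<ILogger>().To<FileLogger>();

        // This dependency needs runtime constructor arguments, so tag the
        // binding with metadata that records the bound (concrete) type
        Bind<IReport>().To<Report>().WithMetadata("type", typeof(Report));
    }
}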

After I add the type information into the binding for any dependency that requires runtime constructor arguments, I can use this metadata when resolving, via the metadata key.  The following code shows my Resolve<T> method, which handles both cases: with constructor arguments and without.  When a type needs to be resolved and no arguments are passed in, it calls the simpler "kernel.Get<T>()" – and this should cover 90% of requests, because needing runtime constructor arguments should be rare.  When arguments are passed in, however, it uses the metadata to get the registered type and then uses reflection to find a matching constructor and its parameters.

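A sketch of that Resolve<T> method, under the same assumptions as the module above ("type" is the metadata key registered in the binding):

using System;
using System.Linq;
using Ninject;

public static class Ioc
{
    private static readonly IKernel Kernel = new StandardKernel(new ServiceModule());

    public static T Resolve<T>(params object[] args)
    {
        // Common case: no runtime arguments, let Ninject do all the work
        if (args == null || args.Length == 0)
            return Kernel.Get<T>();

        // Rare case: recover the concrete type from the binding metadata,
        // then pick a constructor by parameter count and invoke it directly
        var binding = Kernel.GetBindings(typeof(T)).First(b => b.Metadata.Has("type"));
        var boundType = binding.Metadata.Get<Type>("type");

        var ctor = boundType.GetConstructors()
                            .First(c => c.GetParameters().Length == args.Length);

        return (T)ctor.Invoke(args);
    }
}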

And the way you use this is as follows:

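Something along these lines, again with the made-up ILogger and IReport:

// No runtime arguments: plain Ninject resolution
var logger = Ioc.Resolve<ILogger>();

// Constructor arguments that are only known at runtime
var report = Ioc.Resolve<IReport>(42, DateTime.Today);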

In conclusion, by adding metadata when binding dependencies you can get at the bound type information in Ninject.  I guess the hard part is remembering to add this extra bit of information; if you forget, a runtime exception will be thrown – which, by the way, is very descriptive.

JavaScript: function expression vs. function statement

In JavaScript, all variables and functions are subject to hoisting.  This simply means that the compiler moves both function and variable declarations to the top of their enclosing scope.  Let's look at an example that shows how hoisting affects function expressions and function statements differently.

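The snippet was an image; this is an equivalent reconstruction:

foo1(); // works, prints "foo1"
foo2(); // Uncaught TypeError: undefined is not a function

// function statement
function foo1() {
    console.log('foo1');
}

// function expression
var foo2 = function () {
    console.log('foo2');
};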

Both foo1 and foo2 are called before they're declared; however, only the call to foo1() works, and calling foo2() throws the following exception: "Uncaught TypeError: undefined is not a function".  Let's look at how the compiler changes the above code.

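Conceptually, the engine rewrites it like this:

// the function statement is hoisted along with its entire body
function foo1() {
    console.log('foo1');
}

// only the declaration of foo2 is hoisted; its value is undefined
var foo2;

foo1(); // works
foo2(); // TypeError: foo2 is still undefined at this point

// the assignment stays where it was originally written
foo2 = function () {
    console.log('foo2');
};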

Both declarations are hoisted to the top, but the variable foo2 used for the function expression is set to undefined first and is only assigned a function value at the point where it was originally written – hence the undefined error above.

Hoisting also happens within a function scope itself.  If, for example, you declare two variables (name and age) anywhere within your function, those two variables will be hoisted to the top of the function and set to undefined.  Therefore it is always safer, and recommended, to declare all of your variables at the top if you want to avoid surprises brought on by hoisting.
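A quick illustration:

function person() {
    console.log(name); // undefined, not a ReferenceError: the declaration was hoisted
    var name = 'Alice';
    var age = 30;
    console.log(name, age); // Alice 30
}
person();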

Input validation with jQuery

The jQuery validation plugin provides a way to seamlessly enable client-side input validation and lets you dial the knob from basic all the way up to fully customized validation rules.

Here is an example of basic form validation, which contains the HTML that we'll use throughout our examples. One very important thing to note is that the "name" attribute is required for the validation to work!

<style>label.error { color: #ff0000; }</style>
<!-- jQuery and the jQuery Validation plugin must be loaded first; adjust versions as needed -->
<script src="https://code.jquery.com/jquery-1.11.1.min.js"></script>
<script src="https://ajax.aspnetcdn.com/ajax/jquery.validate/1.13.1/jquery.validate.min.js"></script>
<script>
    $(document).ready(function() {
        // initialize validation; the rules come from the class names below
        $('#registerForm').validate();
    });
</script>
<!-- validation classes: email, url, date, number, digits, creditcard -->
<form id="registerForm" class="myForm">
    <fieldset>
        <legend>Register</legend>
        <p>
            <label for="firstName">First Name:</label>
            <input id="firstName" type="text" name="firstName" class="required" title="Please enter a first name" />
        </p>
        <p>
            <label for="lastName">Last Name:</label>
            <input id="lastName" type="text" name="lastName" class="required" />
        </p>
        <p>
            <label for="email">Email:</label>
            <input id="email" type="text" name="email" class="required email" />
        </p>
        <p>
            <label for="acctNum">Account Number:</label>
            <input id="acctNum" type="text" name="acctNum" class="required number" />
        </p>
        <p>
            <label for="password">Password:</label>
            <input id="password" type="password" name="password" />
        </p>
        <p>
            <label for="cfmPassword">Confirm Password:</label>
            <input id="cfmPassword" type="password" name="cfmPassword"/>
        </p>
    </fieldset>
    <input type="submit" value="Submit" />
</form>

This is set up so that when you hit the "Submit" button, it first validates the form like so:

[screenshot: the form rendered with the plugin's default "This field is required." messages next to each invalid input]

But what if you’re posting the form via Ajax and also have multiple forms (possibly added dynamically) that all needs to be validated.  Well, you could loop through all of the forms and initiate validation manually:

<script>
    $(function() {
        // btnAjaxSubmit is a regular button that posts via Ajax instead of submitting
        $('#btnAjaxSubmit').click(function () {
            var pass = true;
            $(".myForm").each(function (index, form) {
                // .valid() comes from the validation plugin
                if (!$(form).valid()) {
                    pass = false;
                }
            });

            if (pass) {
                alert("Ajaxed!");
            } else {
                alert("Validation Failure!!!!");
            }
        });
    });
</script>

One more thing you can do is change the default error messages.  If you've applied only one validation class (e.g., required) to an input, simply add a title attribute to the input field and jQuery validation will use that as the error message:

<input id="firstName" type="text" name="firstName" class="required" title="Please enter a first name" />

But let's say you have both "required" and "email" applied to an input; then the single message in the title attribute won't make much sense, since you probably want to display different error messages to let the user know exactly what is required.  Here is an example of doing just that:

$('#registerForm').validate({
    rules: {
        password: {
            required: true
        },
        cfmPassword: {
            equalTo: "#password"
        }
    },
    messages: {
        email: {
            required: 'Please enter your email address',
            email: 'Not a valid email address'
        }
    }
});

The messages section is where you define custom error messages for each type of validation applied to an input, keyed by the input's "name" attribute.  In the example above I'm also applying validations using the rules section; if you don't like cluttering up your HTML inputs with different class names, this is a route you can take.  And here is our final form:

[screenshot: the form showing the custom validation messages defined above]

One other option is to extend the jQuery validator messages to include your own:

jQuery.extend(jQuery.validator.messages, { required: "custom required message", number: "custom message" });

References: API Documentation

Why static methods are bad for unit tests

The whole idea behind unit testing is to isolate the system under test (SUT) so you can control all the variables and simulate different scenarios to see how your system behaves.  The isolation process usually involves initializing a fake/stub instance and injecting it into the SUT.  With statics, however, there is no instance to initialize, so there is nothing to mock.  Your class is tightly bound to that static method, and there is no way to free it and isolate the system without doing extra work or using specialized mocking frameworks.  (On a side note, I don't think using these specialized mocking frameworks is a good idea either, because they promote bad software design.  If you follow the SOLID design principles, a unit-testable architecture comes naturally.)

Let's look at an example where the method I want to test calls out to a static method, and see what problems that creates.

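The screenshot is gone, so here is a sketch of a method in that shape.  OrderService and AccountValidator are made-up names; the 15% discount comes from the discussion below:

public class OrderService
{
    public decimal ApplyDiscount(int accountId, decimal total)
    {
        // Hard-wired dependency on a static method: there is no seam here
        if (!AccountValidator.Validate(accountId))
            throw new InvalidOperationException("Invalid account.");

        // The logic we actually want to verify: a 15% discount
        return total * 0.85m;
    }
}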

Let's assume that we want to verify that the 15% discount is being applied correctly.  We need a way for the static Validate() method to return true in order to reach the discount calculation logic.  You could extract the call to the static method into a protected virtual method in order to introduce a seam to use during testing, but that means writing more code, which we are trying to avoid – and chances are most developers won't even bother.  Alternatively, you could go into the static Validate() method and try to figure out what constitutes a valid accountId.

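Also reconstructed; the only detail the text preserves is that 5004 passes, so the rule below is invented:

public static class AccountValidator
{
    public static bool Validate(int accountId)
    {
        // some internal rule, e.g. account ids above 5000 are active
        return accountId > 5000;
    }
}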

In this case it’s not that complicated either – I’ll just pass in 5004 as the accountId in my test, so what’s wrong.  Well, what you have done is made your test very brittle, because the ApplyDiscount() test could fail if anything changes in the static Validate() method.  Imagine the static Validate() method internally called some other static method, now you are just going deeper and deeper into the rabbit hole and you might never get out.  Also there is no way to write unit test for static methods.  So please stop writing functional utility methods and embrace object oriented programming, it’s about time don’t you think!

Random C# and CLR thoughts

– Overflow checking is Off:

The CLR has different IL instructions depending on whether overflow checking is performed: add, sub, mul and their checked counterparts add.ovf, sub.ovf and mul.ovf.  To improve performance, it does not check for overflow by default when performing calculations on primitive types (except System.Decimal, which we will get to later).  So if you do not want silent overflows in your system, wrap your statement in the C# checked operator, which makes the compiler emit the xxx.ovf instructions and the CLR throw an OverflowException when an overflow occurs.  Now, going back to the Decimal type: the checked/unchecked operator has no effect on it and is simply ignored (Decimal always throws an OverflowException if the operation can't be performed safely).  On another note, Decimal is not considered a primitive type by the CLR, which means that manipulating it will be slower than primitive values, since there are no IL instructions dedicated to it.

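The original example is lost; a minimal sketch of the behavior:

int max = int.MaxValue;

// default/unchecked: the plain add instruction silently wraps around
int wrapped = unchecked(max + 1);     // -2147483648, no exception

// checked: the compiler emits add.ovf instead
int boom = checked(max + 1);          // throws OverflowException

// checked/unchecked is ignored for Decimal, which always checks
decimal d = decimal.MaxValue;
decimal stillBoom = unchecked(d + 1); // throws OverflowException anyway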

 

– Implicit/explicit conversion operators – what..?

Simply put, these convert an object from one type to another, either implicitly or explicitly.  I prefer the explicit operator, which forces the developer to make a conscious decision about what the heck they are doing (by applying the cast operator).  An example follows:

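Reconstructed as a sketch; the Fahrenheit type is a made-up example:

public struct Fahrenheit
{
    public readonly double Degrees;

    public Fahrenheit(double degrees)
    {
        Degrees = degrees;
    }

    // implicit: a Fahrenheit can silently become a double
    public static implicit operator double(Fahrenheit f)
    {
        return f.Degrees;
    }

    // explicit: going the other way requires a visible cast
    public static explicit operator Fahrenheit(double degrees)
    {
        return new Fahrenheit(degrees);
    }
}

// usage
Fahrenheit temp = (Fahrenheit)98.6; // explicit: a conscious decision
double d = temp;                    // implicit: no cast required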

In either case (implicit/explicit), the compiler turns the conversion into an ordinary method call in the IL – a call to op_Implicit or op_Explicit – which we can see here:

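The IL screenshot is gone; roughly, it contained conversion-operator calls like these, sketched against the Fahrenheit example above:

// call valuetype Fahrenheit Fahrenheit::op_Explicit(float64)
// call float64 Fahrenheit::op_Implicit(valuetype Fahrenheit)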

 

– LayoutKind.Auto/LayoutKind.Sequential attributes

The LayoutKind.Auto setting tells the CLR that it is OK to rearrange fields in memory and group them in certain ways to improve performance, whereas LayoutKind.Sequential says not to mess with the programmer-defined field order.  By default LayoutKind.Auto is applied to reference types and LayoutKind.Sequential to value types, because value type field ordering can matter when dealing with unmanaged code.  However, if you know that your value type won't have to interoperate with unmanaged code, you can apply the StructLayout attribute with LayoutKind.Auto in order to let the CLR optimize the layout.

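A sketch of opting a struct into auto layout (Point is a made-up example):

using System.Runtime.InteropServices;

// Value types default to LayoutKind.Sequential; opt in to Auto when the
// struct never crosses into unmanaged code and the CLR may pack it freely
[StructLayout(LayoutKind.Auto)]
public struct Point
{
    public byte Tag;
    public double X;
    public double Y;
}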

 

– C# is and as operators

They differ from an explicit cast in that neither of these operators throws an exception.  is returns a Boolean, and as performs the cast if possible or returns null, blah, blah… stuff we all know.  However, the second code snippet below is more performant, however marginally, because it checks the object's type once instead of twice.  So pay attention when you code.

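Reconstructed (Employee is a made-up type).  The first version checks the object's type twice, once for is and once again for the cast:

if (obj is Employee)
{
    Employee e = (Employee)obj; // second type check happens here
    e.Work();
}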

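The second version checks the type only once; as yields null when the cast fails:

Employee e = obj as Employee;   // single type check
if (e != null)
{
    e.Work();
}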

 

– More random thoughts

Value types will never throw a NullReferenceException because a value type variable isn't a pointer to an object on the managed heap; a local value type is allocated on the thread's stack itself (so there is no GC involvement either).

Value types derive from System.ValueType which derives from System.Object.

Enum types derive from System.Enum which derives from System.ValueType.
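You can verify the chain yourself:

Console.WriteLine(typeof(int).BaseType);       // System.ValueType
Console.WriteLine(typeof(DayOfWeek).BaseType); // System.Enum
Console.WriteLine(typeof(Enum).BaseType);      // System.ValueType
Console.WriteLine(typeof(ValueType).BaseType); // System.Object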

So, as they say – System.Object is the mother of all freak'n objects – enough said!