Dec 20, 2010

Git finally

After using Git with GitHub for a few months, I am finally loving it, and I have set up a Git server using IIS 7 and git-dot-aspx. I finally feel the force is with me now.


Source control is more important and more complicated than I thought many years ago. Over my years of programming I have come to understand it better. I have used CVS, VSS, SVN, TFS, and now finally Git. I have read many other people's discussions of version control, and I have had many discussions myself. But the views differ so much that it is like a Christian trying to convince an atheist that there is a god. How do you explain to someone who loves single checkout in VSS that multiple checkout is better? How do you explain to someone who loves VSS that SVN is better, or to someone who loves SVN that Git is the best? I once heard an IT manager say, "I have never seen enterprise software successfully implemented without exclusive checkout." Such debates are rarely worth having. But the semantics remain the same: we are just trying to find a tool that satisfies our needs, makes our work more productive, and keeps us in control. If we don't have that need, or we don't know we have it, or we don't have time to have it, it is very hard to accept a new concept. Git is specially designed to make you feel less intelligent than you thought you were. Learn it.

Dec 7, 2010

"this" in javascript

If you don't know that JavaScript is a functional language and you do a lot of object-oriented programming, the following code may be very confusing for you.

var name = "Jerry";

var x = { name : "Tom",
          sayName1 : function () {
              alert(this.name);
           },
          sayName2 : function () {
             sayName3();
           },
           sayName4 : function () {
            sayName3.call(this); 
          }
        };

var sayName3 = x.sayName1;  

x.sayName1(); //show Tom //line a
sayName3();   //show Tom? no, it is Jerry //line b 
x.sayName2(); //show Tom? no, it is still Jerry!! //line c
x.sayName4();  //now it is Tom //line d

In OO languages like C# and Java, an instance method belongs to an instance (an object). The method knows that "this" refers to the instance it belongs to. If you apply this concept to JavaScript, you will think the behavior on line a makes sense, line b is confusing, and lines c and d are even more confusing. In JavaScript, an object can reference a function, but functions don't belong to any object, and a function does not know what "this" is until it is called in one of the following ways. Let's read the following code.


var x = new ConstructorFunction();
var y = simpleCallFunction();
var z = o.memberCallFunction();
var a = usingapplyCallFunction.apply(o, [p1, p2]);
var b = usingcallCallFunction.call(o, p1, p2);

On line y, the "this" in simpleCallFunction always refers to the global object. Lines a and b are basically the same except for syntax: both explicitly specify the object that "this" refers to. Line z can be rationalized as "var temp = o.memberCallFunction; temp.call(o);", or just "o.memberCallFunction.call(o);". Line x is a constructor call, where "this" is the object being created.
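To make these call forms concrete, here is a small runnable sketch; the names (Person, whoAmI, o) are mine, invented for illustration:

```javascript
function Person(name) { this.name = name; }

function whoAmI() {
  // in a plain (non-strict) call, "this" is the global object,
  // which has no meaningful name, so we report "global"
  return (this && this.name) || "global";
}

var o = { name: "Tom", whoAmI: whoAmI };

var x = new Person("Jerry");   // constructor call: "this" is the new object
console.log(x.name);           // Jerry
console.log(whoAmI());         // simple call: global
console.log(o.whoAmI());       // member call: Tom
console.log(whoAmI.apply(o));  // apply: Tom
console.log(whoAmI.call(o));   // call: Tom
```

The same function body reports a different "this" depending only on how it is called, which is the whole point of this section.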


What the global object is depends on the engine. In a browser, it is the window object, but there are other environments too. So how is the window object passed in? When a page is loading, you can think of the JavaScript engine as converting the string in a <script /> block into a function, say x, and then calling x.call(window). That is why all the code in the block knows that "this" is window. Line y can be rationalized as "simpleCallFunction.call(window)". The only special case is line x, which uses the function as a constructor. So what about native JavaScript objects, for example Array? If you want to use the push method of an array on a non-array object, can you? Yes, you can; in fact, that is why a jQuery object works like an array even though it is not an array. Here is the trick.


var fQuery = { push: [].push };
fQuery.push("item1");
//it works like an array!!
//push does not belong to Array instances; you can apply it to any object
alert(fQuery[0]);     // item1
alert(fQuery.length); // 1

In conclusion: a JavaScript function does not belong to any object. "this" is a parameter passed in, implicitly or explicitly, when the function is called; "this" is just the same as any other parameter. The language merely provides some misleading (but also convenient) syntax for passing this parameter into a function.
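To see that "this" really is just another parameter, compare these two equivalent functions (a sketch of mine, with invented names):

```javascript
function sayNameExplicit(self) { return self.name; }  // "this" as an ordinary parameter
function sayNameImplicit() { return this.name; }      // "this" passed implicitly

var tom = { name: "Tom" };

var a = sayNameExplicit(tom);       // pass the "receiver" explicitly
var b = sayNameImplicit.call(tom);  // call() makes the implicit parameter explicit
console.log(a, b); // Tom Tom
```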

mutable binding and ref type

We all know that in F#, once you bind a value to an identifier, the value cannot be changed. What does this mean? It means your identifier will be like a read-only property that returns a value determined at the time of binding. After binding, the identifier behaves like a constant. mutable makes an identifier more like a variable, but actually it is a property with both a getter and a setter. Let's look at the following code.

let testMutable =
    //let mutable temp = 1
    let temp = 1
    let innerFunction() = 
        printfn "%i" temp
        ()
    innerFunction

let t1 = testMutable
t1()

This code compiles, but if we use mutable temp instead, the compiler shows an error like:



The mutable variable 'temp' is used in an invalid way. Mutable variables cannot be captured by closures. Consider eliminating this use of mutation or using a heap-allocated mutable reference cell via 'ref' and '!'.


So why? An identifier defined as mutable is limited in that it cannot be captured by an inner function like the one above. But still, why? Let's decompile the working version into C# and see what is going on.


public static void main@()
{
    int temp = 1; //because temp cannot be property
    FSharpFunc<Unit, Unit> innerFunction = new Program.innerFunction@161(temp);
    FSharpFunc<Unit, Unit> testMutable = testMutable@158 = innerFunction;
    FSharpFunc<Unit, Unit> t1 = t1@165 = Program.testMutable;
    Program.t1.Invoke(null);
}

[CompilationMapping(SourceConstructFlags.Value)]
public static FSharpFunc<Unit, Unit> testMutable
{
    get
    {
        return $Program.testMutable@158;
    }
} 

[Serializable]
internal class innerFunction@161 : FSharpFunc<Unit, Unit>
{
    // Fields
    public int temp;

    // Methods
    // innerFunction accepts the value from the parent function as a constructor
    // parameter; it looks like a read/write property is not sufficient to
    // support the closure capture
    internal innerFunction@161(int temp)
    {
        this.temp = temp;
    }

 public override Unit Invoke(Unit unitVar0)
 {
        FSharpFunc<int, Unit> func = ExtraTopLevelOperators.PrintFormatLine<FSharpFunc<int, Unit>>(new PrintfFormat<FSharpFunc<int, Unit>, TextWriter, Unit, Unit, int>("%i"));
        int temp = this.temp; //copy the member to local variable
        func.Invoke(temp);
        return null;
    }
}

Let's change it to ref, as the error message suggests, add some code to change the value of the identifier (because now we can), and decompile it again.


let testMutable =
    let temp = ref 1
    let innerFunction() = 
        temp := !temp  + 1
        printfn "%i" !temp
        ()
    innerFunction

let t1 = testMutable
t1() //print 2
t1() //print 3

public static void main@()
{
    FSharpFunc<Unit, Unit> innerFunction = new Program.innerFunction@161(Operators.Ref<int>(1));
    FSharpFunc<Unit, Unit> testMutable = testMutable@158 = innerFunction;
    FSharpFunc<Unit, Unit> t1 = t1@166 = Program.testMutable;
    Program.t1.Invoke(null);
    Program.t1.Invoke(null);
}

[CompilationMapping(SourceConstructFlags.Value)]
public static FSharpFunc<Unit, Unit> testMutable
{
    get
    {
        return $Program.testMutable@158;
    }
}
 
[Serializable]
internal class innerFunction@161 : FSharpFunc<Unit, Unit>
{
    // Fields
    public FSharpRef<int> temp;

    // Methods
    internal innerFunction@161(FSharpRef<int> temp)
    {
        this.temp = temp;
    }

    public override Unit Invoke(Unit unitVar0)
    {
        // we can change the value because it is an F# ref type
        Operators.op_ColonEquals<int>(this.temp, Operators.op_Dereference<int>(this.temp) + 1);

        FSharpFunc<int, Unit> func = ExtraTopLevelOperators.PrintFormatLine<FSharpFunc<int, Unit>>(new PrintfFormat<FSharpFunc<int, Unit>, TextWriter, Unit, Unit, int>("%i"));
        // get the value from the ref object using the op_Dereference operator
        int num = Operators.op_Dereference<int>(this.temp);
        func.Invoke(num);
        return null;
    }
}

We can say that, to implement the closure feature (an inner function can remember a mutable value from its parent function), a simple mutable value is not robust enough; we need a ref type. The ref type here is not the reference type we talk about in the CLR or C#; the ref type is a generic record type. We can mimic the ref implementation as follows, renaming ref to wrapper.


type wrapper<'a> = { mutable innerValue: 'a }

let testref =
    let x = { innerValue = 1 }
    let innerFunction() =
        x.innerValue <- x.innerValue + 1
        printfn "%i" x.innerValue
    innerFunction

let f = testref
f()
f()

//decompiled into 
public static void main@()
{
    Program.wrapper<int> x = new Program.wrapper<int>(1);
    FSharpFunc<Unit, Unit> innerFunction = new Program.innerFunction@138(x);
    FSharpFunc<Unit, Unit> testref = testref@135 = innerFunction;
    FSharpFunc<Unit, Unit> f = f@142 = Program.testref;
    Program.f.Invoke(null);
    Program.f.Invoke(null);
}

[Serializable, CompilationMapping(SourceConstructFlags.RecordType)]
public sealed class wrapper<a> : IEquatable<Program.wrapper<a>>, IStructuralEquatable, IComparable<Program.wrapper<a>>, IComparable, IStructuralComparable
{
    // Fields
    [DebuggerBrowsable(DebuggerBrowsableState.Never)]
    public a innerValue@;

    // Methods
    public wrapper(a innerValue);
    [CompilerGenerated]
    public sealed override int CompareTo(Program.wrapper<a> obj);
    [CompilerGenerated]
    public sealed override int CompareTo(object obj);
    [CompilerGenerated]
    public sealed override int CompareTo(object obj, IComparer comp);
    [CompilerGenerated]
    public sealed override bool Equals(Program.wrapper<a> obj);
    [CompilerGenerated]
    public sealed override bool Equals(object obj);
    [CompilerGenerated]
    public sealed override bool Equals(object obj, IEqualityComparer comp);
    [CompilerGenerated]
    public sealed override int GetHashCode();
    [CompilerGenerated]
    public sealed override int GetHashCode(IEqualityComparer comp);

    // Properties
    [CompilationMapping(SourceConstructFlags.Field, 0)]
    public a innerValue { get; set; }
}

 
[Serializable]
internal class innerFunction@138 : FSharpFunc<Unit, Unit>
{
    // Fields
    public Program.wrapper<int> x;

    // Methods
    internal innerFunction@138(Program.wrapper<int> x)
    {
        this.x = x;
    }

    public override Unit Invoke(Unit unitVar0)
    {
        this.x.innerValue = this.x.innerValue@ + 1;
        FSharpFunc<int, Unit> func = ExtraTopLevelOperators.PrintFormatLine<FSharpFunc<int, Unit>>(new PrintfFormat<FSharpFunc<int, Unit>, TextWriter, Unit, Unit, int>("%i"));
        int num = this.x.innerValue@;
        return func.Invoke(num);
    }
}

Of course, ref provides the "!" operator instead of v.innerValue, and the ":=" operator in place of "v.innerValue <- x", which is much more elegant. Once we understand ref, we can use it to create functions that use closures, which is really, really powerful.
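As an aside, the same wrapper structure maps naturally to other languages. JavaScript closures can already capture local variables directly, but the sketch below (my own, with invented names) deliberately mirrors the F# ref version one-to-one:

```javascript
function makeCounter() {
  var cell = { value: 1 };          // plays the role of F#'s "ref 1"
  return function () {
    cell.value = cell.value + 1;    // like  temp := !temp + 1
    return cell.value;              // like  !temp
  };
}

var t1 = makeCounter();
console.log(t1()); // 2
console.log(t1()); // 3
```

Each call mutates the shared cell, so successive calls produce 2 and 3, matching the F# output above.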

How binding in F# is implemented in the CLR

In the C# or CLR philosophy, everything is an object; objects are the horsepower. In the F# philosophy, everything is a value; functions are the horsepower. But F# is implemented on the CLR, and I was curious how, so I used Reflector to decompile the generated code. Reflector is not fully accurate here, though; to really understand it, it is better to use ILDASM.exe.

let m1 = 1 // compiled to a read-only property
//public static int m1
//{
//    [CompilerGenerated, DebuggerNonUserCode]
//    get
//    {
//        return 1;
//    }
//}

let mutable m2 = 1 // a property with both getter and setter
//[CompilationMapping(SourceConstructFlags.Value)]
//public static int m2
//{
//    get
//    {
//        return $Program.m2@174;
//    }
//    set
//    {
//        $Program.m2@174 = value;
//    }
//}
 
 
let m3() = 1 // compiled to a method that returns 1
//public static int m3()
//{
//    return 1;
//}
 

let m4 x = x + 1  // compiled to a method that accepts one parameter
//public static int m4(int x)
//{
//    return (x + 1);
//}

 
let m5 (x) = x + 1 // a method that accepts one parameter, same as m4
//public static int m5(int x)
//{
//    return (x + 1);
//}

let m6 x y = x + y + 1 // a method that accepts two parameters x and y, with a CompilationArgumentCounts attribute
//[CompilationArgumentCounts(new int[] { 1, 1 })]
//public static int m6(int x, int y)
//{
//    return ((x + y) + 1);
//}


let m7 (x, y) = x + y + 1 // a method that accepts one parameter, a tuple, compiled into a normal method
//public static int m7(int x, int y)
//{
//    return ((x + y) + 1);
//}

let m8 = m6 1 // partial application, compiled to a property that returns a function object of type FSharpFunc<int, int>
//CompilationMapping(SourceConstructFlags.Value)]
//public static FSharpFunc<int, int> m8
//{
//    get
//    {
//        return $Program.m8@180;
//    }
//}

//[Serializable]
//internal class m8@180 : FSharpFunc<int, int>
//{
//    // Fields
//    [DebuggerBrowsable(DebuggerBrowsableState.Never), CompilerGenerated, DebuggerNonUserCode]
//    public int x;
//
//    // Methods
//    internal m8@180(int x)
//    {
//        this.x = x;
//    }
//
//    public override int Invoke(int y)
//    {
//        return Program.m6(this.x, y);
//    }
//}


let m9 y = m7 (1, y) // compiled to a method that calls m7 with 1 and y
//public static int m9(int y)
//{
//    return m7(1, y);
//}

let turple1 = (1, 2)
//[CompilationMapping(SourceConstructFlags.Value)]
//public static Tuple<int, int> turple1
//{
//    get
//    {
//        return $Program.turple1@268;
//    }
//}
 
// [DebuggerBrowsable(DebuggerBrowsableState.Never)]
//internal static Tuple<int, int> turple1@268;
//

 
let m10 = m7 turple1 
//[CompilationMapping(SourceConstructFlags.Value)]
//public static int m10
//{
//    get
//    {
//        return $Program.m10@281;
//    }
//}
// 
//[DebuggerBrowsable(DebuggerBrowsableState.Never)]
//internal static int m10@281;
// 

Nov 17, 2010

Pattern matching shorthand in function definition

Pattern matching is a powerful construct in F#, but sometimes it is confusing to beginners. Below is the pattern-matching shorthand in a function definition: the parameter is implied by the function keyword, and essentially you can rewrite it as shown in the comments.


let listOfList = [[2; 3; 5]; [7; 11; 13]; [17; 19; 23; 29]]

let rec concatList =
    function
    | head :: tail -> head @ (concatList tail)
    | [] -> []

//let rec concatList l =
//    match l with
//    | head :: tail -> head @ (concatList tail)
//    | [] -> []

let primes = concatList listOfList;

printfn "%A" primes

Nov 15, 2010

benchmark your javascript

Here is a script that is used to benchmark jQuery.

// Runs a function many times without the function call overhead
function benchmark(fn, times, name){
 fn = fn.toString();
 var s = fn.indexOf('{')+1,
  e = fn.lastIndexOf('}');
 fn = fn.substring(s,e);
 
  return benchmarkString(fn, times, name);
}

function benchmarkString(fn, times, name) {
  // build the timing function first, so displayName is set on a function
  // rather than on the returned number
  var f = new Function("i", "var t = new Date; while (i--) {" + fn + "} return new Date - t");
  f.displayName = name || "benchmarked";
  return f(times); // elapsed milliseconds
}
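Stripped to its essence (a simplified sketch of mine; the displayName bookkeeping is dropped, and timeBody is an invented name), the trick is to splice the code string into a timing loop built with the Function constructor:

```javascript
// build a function that runs "body" in a loop "times" times
// and returns the elapsed milliseconds
function timeBody(body, times) {
  var runner = new Function(
    "i",
    "var t = new Date; while (i--) {" + body + "} return new Date - t"
  );
  return runner(times);
}

var ms = timeBody("Math.sqrt(2);", 100000);
console.log(typeof ms); // number
```

Because the body is inlined into the generated function, there is no per-iteration function-call overhead, which is the whole point of the helper.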

Nov 13, 2010

jQuery object is an array like object

A jQuery object is not an Array object, but it looks like an array. The following code shows how this is implemented.

var o = {"0":1, "1": 2, length:2};
var a = [].slice.call(o, 0);
alert(a); // 1, 2


//or you can do this
var o = {}
o[0] = 1;
o.length = 1;
o[1] = 2;
o.length = 2;
var a = [].slice.call(o, 0);
alert(a); // 1, 2

//or you can do this
var o = {};
[].push.call(o, 1);
[].push.call(o, 2);
var a = [].slice.call(o, 0);
alert(a); // 1, 2

object toString

The toString member of different objects is redefined in their prototypes; for example, Object.prototype.toString is different from Array.prototype.toString. To apply Object.prototype.toString to an array object, we can write the following code.

var toString = Object.prototype.toString;
alert(toString.call([1, 2])); //[object Array]
alert([1, 2].toString()); //1,2

The first call returns the type of the object, "[object Array]", while the second returns the joined elements, "1,2".

using the each function over the "for" construct

jquery.each

jQuery.each("Boolean Number String Function Array Date RegExp Object".split(" "), function(i, name) {
  class2type[ "[object " + name + "]" ] = name.toLowerCase();
});
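Put together with the toString trick above, this table gives a little type-detection helper. The following is a self-contained sketch (the name "type" is mine, and forEach stands in for jQuery.each), essentially how jQuery classifies values internally:

```javascript
var class2type = {};
"Boolean Number String Function Array Date RegExp Object".split(" ")
  .forEach(function (name) {            // forEach instead of jQuery.each
    class2type["[object " + name + "]"] = name.toLowerCase();
  });

function type(obj) {
  // look up the internal [[Class]] tag; fall back to plain "object"
  return class2type[Object.prototype.toString.call(obj)] || "object";
}

console.log(type([1, 2]));  // "array"
console.log(type("hi"));    // "string"
console.log(type(/x/));     // "regexp"
```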



Oct 30, 2010

Semantics only works in a context

I am a believer in semantics. That is why I named the domain semanticsworks.com. But let me take a step back and explain what I mean by semantics here. You may find a definition in Wikipedia, but what I mean here is the true intention or need behind doing something. For example, when I say "I need a car to go to work", the true intention is "I need to get to work"; "a car" is just a means, an implementation. If I worked at home, I wouldn't need a car at all. As a software developer, I can easily apply semantics to programming. For example, I prefer writing semantic HTML rather than mixing in presentational HTML, and I focus on abstraction (interfaces) rather than implementation (classes), and so on. When I study a new technology or a new programming language, I first think about what problem it is trying to solve, and then about how it solves that problem more efficiently and elegantly. When I want to propose a solution or design to my client, I raise what the existing problem is and how my solution solves it in a better way. Semantics seems to work. But one important thing shouldn't be forgotten: context. Here is what JavaScript guru Douglas Crockford said in his Loopage presentation.


A little while ago I was talking to a friend of mine — a really bright guy, one of the smartest programmers I know – about what we should do next with JavaScript. I suggested to him that we should get the tail recursion thing going, we should get that fixed. Why do we want to do that? Well, I said, among a lot of other things it would allow us to do continuation style passing. I think that would be a useful option for us to be able to provide within the language, and if we don't optimize the tail calls then we don't get that. His answer was: I've never used continuation passing, so I really don't see the value of it, which I immediately recognized as a really stupid answer.

The way I was able to recognize it so fast is that I have used that same argument myself, and I've been hearing that same argument throughout my entire career. Basically, the core of that argument is: "I'm not qualified to make a decision about that. The onus is on you to educate me deeply about this thing that I'm not even interested in." There's no way to overcome that kind of requirement, nobody can win that argument. But it turns out that usually that reasoning is wrong. I've heard that argument about why we shouldn't have to worry about closure. I've heard it about why we shouldn't use recursion. I've heard it about why punch cards are better than timesharing. You can go all the way back to 'it's better for us to be programming with digits, I don't understand why we need compilers'. It's been going on from the beginning. That's why software development is so slow, because basically we have to wait for a generation to die off before we can get critical mass on the next good idea.

Semantics does not always work as we expect. Seemingly, Crockford forgot his friend's context. He should have gotten his friend to buy into his context in the first place. When you propose a solution to a problem that your client does not see as a problem, or does not see a need to solve immediately, your semantics will not work in your client's context. So here is what you can do.


  1. Think in the context of your client: don't propose a solution to a problem your client has no interest in solving; only propose solutions that fit your client's context.
  2. Guide your client to think in your context, make him believe it is a problem, and then propose your solution. Sometimes this can be very hard, if the contexts collide heavily.
  3. Ignore your client and move on.

There are lots of new technologies coming, like Domain Specific Languages, cloud computing, Service Oriented Architecture, and so on. How soon they are adopted depends on how readily people accept the contexts in which their designers think, and how soon people accept those contexts will depend somewhat on the results the early adopters achieve.


A friend of mine recently asked me what version control system he should use. I said "Git". He asked why. I said it is a distributed version control system and it is scalable. Then he said, "We don't need it to be distributed." You know, I made the same mistake: I lost my friend's context.

Oct 14, 2010

Stop fighting

JavaScript is an amazing language, and it continues to amaze me. But I learned it the hard way, because I was trying to translate it into classic OO concepts, and I failed. Douglas Crockford is my JavaScript idol; here is what he says about how he finally understood JavaScript.

Eventually I figured out what was going on. After a lot of struggle, I eventually figured out that it was a functional language, and at that moment I stopped fighting it. I remember when that moment occurred, I was bicycling. I had just read the ECMAScript standard, which is a really difficult thing to understand. But I read through it, and then I had this epiphany when I was miles away from a computer. Oh, it's got functions in it, there are lambdas — I can do this now. It completely changed the way I thought about the language. In the end, the story ended successfully. We finished on time and on budget, Turner shipped it, and everything went great.

The lesson I learned is to stop fighting against new ideas with my old knowledge. A new idea may seem odd, but it will be appreciated if you follow along, and it may express similar semantics in a way you never thought of. I love the philosophy of the Chinese philosopher Zhuangzi; here are two excerpts that I find helpful for understanding new ideas.


The Ruler of the Southern Ocean was Shu, the Ruler of the Northern Ocean was Hu, and the Ruler of the Centre was Chaos. Shu and Hu were continually meeting in the land of Chaos, who treated them very well. They consulted together how they might repay his kindness, and said, 'Men all have seven orifices for the purpose of seeing, hearing, eating, and breathing, while this (poor) Ruler alone has not one. Let us try and make them for him.' Accordingly they dug one orifice in him every day; and at the end of seven days Chaos died.

Paoding was cutting up an ox for the ruler Wen Hui. Whenever he applied his hand, leaned forward with his shoulder, planted his foot, and employed the pressure of his knee, in the audible ripping off of the skin, and slicing operation of the knife, the sounds were all in regular cadence. Movements and sounds proceeded as in the dance of 'the Mulberry Forest' and the blended notes of the King Shou.' The ruler said, 'Ah! Admirable! That your art should have become so perfect!' (Having finished his operation), the cook laid down his knife, and replied to the remark, 'What your servant loves is the method of the Dao, something in advance of any art. When I first began to cut up an ox, I saw nothing but the (entire) carcase. After three years I ceased to see it as a whole. Now I deal with it in a spirit-like manner, and do not look at it with my eyes. The use of my senses is discarded, and my spirit acts as it wills. Observing the natural lines, (my knife) slips through the great crevices and slides through the great cavities, taking advantage of the facilities thus presented. My art avoids the membranous ligatures, and much more the great bones. A good cook changes his knife every year; (it may have been injured) in cutting - an ordinary cook changes his every month - (it may have been) broken. Now my knife has been in use for nineteen years; it has cut up several thousand oxen, and yet its edge is as sharp as if it had newly come from the whetstone. There are the interstices of the joints, and the edge of the knife has no (appreciable) thickness; when that which is so thin enters where the interstice is, how easily it moves along! The blade has more than room enough. Nevertheless, whenever I come to a complicated joint, and see that there will be some difficulty, I proceed anxiously and with caution, not allowing my eyes to wander from the place, and moving my hand slowly. 
Then by a very slight movement of the knife, the part is quickly separated, and drops like (a clod of) earth to the ground. Then standing up with the knife in my hand, I look all round, and in a leisurely manner, with an air of satisfaction, wipe it clean, and put it in its sheath.' The ruler Wen Hui said, 'Excellent! I have heard the words of my cook, and learned from them the nourishment of (our) life.'

Oct 4, 2010

Static class vs Singleton

Design pattern questions are often asked in developer interviews. In one interview, I was asked to describe a design pattern I am familiar with, other than Singleton. Maybe the interviewer thought Singleton is too easy to answer. Yes, Singleton is very simple; a sample follows.


class Program
{
 static void Main(string[] args)
 {
  Printer.Instance().Print();
 }
}

class Printer
{
 static Printer _printer;

 public static Printer Instance()
 {
  if (_printer == null)
  {
   _printer = new Printer();
  }
  return _printer;
 }

 protected Printer()
 { }

 public void Print()
 {
  Console.WriteLine("printing...");
 }
}

Although the Singleton pattern is simple, it can also be used to test an applicant's understanding of objects. Let's say someone writes the following code instead and argues that design patterns are useless and a structured procedure is better. In some cases, a structured procedure is just as good. Can you write some code demonstrating a scenario where Singleton solves a problem that a static method cannot? (Don't think of multi-threading; it is not the issue here.)


class Program
{
 static void Main(string[] args)
 {
  Printer.Print();
 }
}

static class Printer
{
 public static void Print()
 {
  Console.WriteLine("printing...");
 }
}


My answer

Although the two solutions look similar, they reflect different ways of thinking. One of the books that affected me most in my programming career is Object Thinking. In this book, it says


The essential thinking difference is easily stated: “Think like an object.” Of course, this statement gets its real meaning by contrast with the typical approach to software development: “Think like a computer.” Thinking like a computer is the prevailing mental habit of traditional developers.

The singleton solution is a reflection of "think like an object". When you think like an object, you are also an object, and the other objects are your buddies. You interact with your printer buddy through his interface. As long as your buddy exposes the printer interface, you know how to communicate with him. It doesn't matter who your buddy is; what matters is what kind of service he provides you. Your scenario will be: I see a printer guy, he is the only printer guy, I don't care who he is, but he says he can print, so I ask him, "Print, please." If you think this way, you can write the following code. This is the fundamental feature of object-oriented technique: polymorphism.


class Program
{
 static void Main(string[] args)
 {
  Printer.Instance().Print();
 }
}

abstract class Printer
{
 static Printer _printer;

 public static Printer Instance()
 {
  if (_printer == null)
  {
    Type printerType = GetPrinterTypeFromConfiguration();
    _printer = Activator.CreateInstance(printerType) as Printer;
  }
  return _printer;
 }
  public abstract void Print();
}

class LaserJetPrinter : Printer
{
   public override void Print()
   {
      Console.WriteLine("hhhhhhhhhhhh");
   }
}

class InkJetPrinter : Printer
{
   public override void Print()
   {
      Console.WriteLine("kakaka");
   }
}

If you think like a machine, your mindset will be: I am the master of the printer, and I want to feed it some instructions. OK, I have the manual of the machine; one of the instructions is "print", so let me feed it, and it prints. If you think like this, you will write a static method like the one above. This is not necessarily bad practice; in fact, it is even the best practice (see CA1822: Mark members as static) if you never need different print behavior. But once you do, a static class gives you much less flexibility, and OO is your friend.

Sep 4, 2010

Defensive programmer, code analysis, and code review

I was born in China; English is not my native language, and I still have difficulty writing in English. Once, I sent an email to my boss, and in his reply he highlighted my typos and grammatical errors. At first I felt embarrassed, but I immediately appreciated his effort in doing so. I am sure my emails are sometimes confusing, but nobody had ever done that for me before. I guess people don't want to hurt my feelings, or don't want to spend the time correcting me. I do use the spelling and grammar check functions of my email app, and I have never felt embarrassed by them. Isn't that strange?


I have been a programmer for years. I have made all the programming mistakes that can be made, and I still make them, just less often. Some developers correct my errors; some don't. I felt embarrassed early on, but I gradually accepted the fact that my code sucks and came to appreciate their effort. Compilers also correct my mistakes; I was frustrated in the early days, but I have never felt embarrassed by a compiler.


As I gained more experience, I found that it was never easy to tell fellow developers about mistakes in their code or design. I was working at a software company when one of the senior developers resigned for a new job, and my boss asked me to take over his project, which I had never touched. First, I reviewed his code; I felt sick and wondered how a senior developer could write such crap. During the knowledge transfer, I asked lots of critical questions. I knew I hurt his feelings, and he was unhappy. Personally, I think he is a nice and funny guy, and I regret that. Since then, I have tried to be careful with my words when I give my opinion of others' code. Even so, it is still inevitable to hurt someone's feelings sometimes, if my opinion is too radical for him.


Is this just my unique experience? In the book Debugging Microsoft .NET 2.0 Applications, the author, John Robbins, mentions his own experience in Chapter 3, "Assert, Assert, Assert, Assert". He argued with his boss about a section of code that misused Assert and said, "Whoever wrote this needs to be fired! I can't believe we have an engineer on our staff who is this incredibly and completely stupid!" His boss got very quiet, grabbed the paper out of his hands, and quietly said, "That's my code." John resigned from the company later.


Although the book is not so new, I find it still very useful. The author discusses some proactive tools to improve code quality; one of them is code analysis, and Chapter 8 is dedicated to "Writing Code Analysis Rules". I think code analysis is quite effective because, no matter how defensive you are as a developer, you can seldom be embarrassed by a machine. A machine simply reports a warning or an error whenever you break the rules.


For a while, I doubted the effectiveness of code review. My experience told me that developers tend to be defensive about their own code. Why? If there is a large gap between the author's coding quality and experience and the reviewer's expectations, the reviewer may ask questions like "How can a senior developer write such crap?" This makes the author look incompetent, so it is natural for him to be defensive. In such a case, code review will not help much. It may be more effective for the company to send the author to a crash course to close the gap, or to review the recruitment process to find out why the gap wasn't caught in the first place. If the gap is small, code review is generally a powerful software quality tool. It has been adopted by many good software companies. It is said that some companies go so far that code written by a junior developer cannot be merged into trunk until it has been reviewed by a senior developer. I am not sure if that is true, but code review not only improves quality but also transfers knowledge, and it can become a company culture that attracts people. However, we developers are still human, and we should be very clear that a code review is a review of code, not a performance review of an employee. We should not use words that target the person rather than the code. A senior developer once reviewed my code and said, "You don't understand what object-oriented programming is." I was upset for a while. Am I too vulnerable? Maybe. I am human.

Aug 27, 2010

Importing MSBuild files

The Import element is the re-usability mechanism in MSBuild. When an Import element is encountered, the following steps take place:


  1. The working directory is changed to that of the imported project file.

  2. Project element attributes are processed.

    If a value has already been assigned to the DefaultTargets attribute, the imported value is ignored; otherwise it becomes the value of DefaultTargets. If an InitialTargets attribute is present, its list of targets is appended to the current list of InitialTargets.



  3. Project element nodes are processed.

  4. The working directory is restored to its previous value.
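As a sketch of the mechanism, a hypothetical importing project might look like this (Common.targets is a made-up file name):

```xml
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- The imported file's properties, items, and targets are merged
       into this project following the steps above -->
  <Import Project="Common.targets" />
</Project>
```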

DefaultTargets and InitialTargets

The DefaultTargets attribute is a list of targets to execute if no target is specified. The InitialTargets attribute is a list of targets to be executed before any other targets. In the following example, i1, i2, t1, t2 run in that order.

<?xml version="1.0" encoding="utf-8" ?>
<Project ToolsVersion="3.5" DefaultTargets="t1;t2" InitialTargets="i1;i2" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="t1">
    <Message Text="t1 is running" />
  </Target>
  <Target Name="t2">
    <Message Text="t2 is running" />
  </Target>
  <Target Name="i1">
    <Message Text="i1 is running" />
  </Target>
  <Target Name="i2">
    <Message Text="i2 is running" />
  </Target>
</Project>
   

Aug 26, 2010

property and item in msbuild

In MSBuild, there are two ways to store variable information: properties and items. Here is an example of how to define properties and items.


<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
    <DebugType>pdbonly</DebugType>
    <Optimize>true</Optimize>
    <OutputPath>bin\Release\</OutputPath>
    <DefineConstants>TRACE</DefineConstants>
    <ErrorReport>prompt</ErrorReport>
    <WarningLevel>4</WarningLevel>
    <CodeAnalysisRuleSet>AllRules.ruleset</CodeAnalysisRuleSet>
  </PropertyGroup>
  <!--reference items-->
  <ItemGroup>
    <Reference Include="System" />
    <Reference Include="System.Core">
      <RequiredTargetFramework>3.5</RequiredTargetFramework>
    </Reference>
    <Reference Include="System.Xml.Linq">
      <RequiredTargetFramework>3.5</RequiredTargetFramework>
    </Reference>
    <Reference Include="System.Data.DataSetExtensions">
      <RequiredTargetFramework>3.5</RequiredTargetFramework>
    </Reference>
    <Reference Include="System.Data" />
    <Reference Include="System.Xml" />
  </ItemGroup>


 <PropertyGroup>
   <Pfile>Program.cs;Msbuild.xml</Pfile>
 </PropertyGroup>


But they have different usages. Normally a property holds a single value; if you redefine it, the old value is overwritten (although a property value can also be used as a semicolon-separated list, if you want). An item is a list, so you can define an item multiple times, and each definition adds entries to the list.
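A minimal sketch (with made-up names) of the difference:

```xml
<PropertyGroup>
  <MyProp>one</MyProp>
  <MyProp>two</MyProp>    <!-- $(MyProp) is now "two": the old value is overwritten -->
</PropertyGroup>
<ItemGroup>
  <MyItem Include="one" />
  <MyItem Include="two" /> <!-- @(MyItem) is now "one;two": items accumulate -->
</ItemGroup>
```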
You can also pass properties from the MSBuild command line, like the following


MSBuild MyApp.csproj /t:Clean
                     /p:Configuration=Debug;TargetFrameworkVersion=v3.5

Properties and items are also referenced differently. Here is an example of how they are referenced.




<!--Referencing a property -->
<Message Text="SchemaVersion: $(SchemaVersion)" />

<!--Referencing an item -->
<Message Text="Reference Items: @(Reference)" />

<!--Referencing an item's metadata FullPath -->
<Message Text="MyFile.FullPath: @(MyFile->'%(FullPath)')" />

<!--Referencing an item's metadata RequiredTargetFramework -->
<Message Text="Reference Items: @(Reference->'%(RequiredTargetFramework)')" />





We can use three wildcard elements (?, *, **) to define items, for example


<ItemGroup>
<MyFile Include="Program.cs;Msbuild.xml" />
<MyFile Include="*.doc" />
<MyFile Include="src\**\*.doc" />
<MyFile Include="**\*.cs" />
</ItemGroup>

After a property is defined, it can also be referenced in later definitions.


<ItemGroup>
  <AppConfigFileDestination Include="$(OutDir)$(TargetFileName).config"/>
</ItemGroup>

There are some special properties and item metadata. For properties, these are called reserved properties; for more information see http://msdn.microsoft.com/en-us/library/ms164309.aspx. For items, they are called well-known item metadata; for more information see http://msdn.microsoft.com/en-us/library/ms164313.aspx


The "$" syntax also works with environment variables.


<Target Name="PrintSystemPath">
  <Message Text="Path: $(Path)"/>
</Target>

Aug 5, 2010

Application Type in Silverlight

When you create a Silverlight app, by default an App.xaml file is created, and by default the class behind it is the EntryPointType. This information is saved in the AppManifest.xml, so that when the xap file is downloaded, the Silverlight runtime will create an instance of that class. The class must derive from the Application class. After the application class is instantiated by the runtime, the most important job of the constructor is to set the RootVisual property. You can do this in code, or you can do it by loading a xaml file. Visual Studio uses the second approach, and here is the pattern: the xaml file automatically generates some code-behind, which runs the following code.


public partial class App1 : Application
{
    private bool _contentLoaded;

    /// <summary>
    /// InitializeComponent
    /// </summary>
    [System.Diagnostics.DebuggerNonUserCodeAttribute()]
    public void InitializeComponent()
    {
        if (_contentLoaded)
        {
            return;
        }
        _contentLoaded = true;
        System.Windows.Application.LoadComponent(this, new System.Uri("/HelloWorldSilverlight;component/App1.xaml", System.UriKind.Relative));
    }

    public App1()
    {
        this.Startup += this.Application_Startup;
        this.Exit += this.Application_Exit;
        this.UnhandledException += this.Application_UnhandledException;

        InitializeComponent();
    }
}

XAML is faster than code in Silverlight

Parsing XAML is faster than instantiating objects from code, because the XAML parser does not create the API objects you use to interact with elements; instead it only creates an internal representation. Once you start interacting with elements from code, Silverlight creates the API objects, which slows down your application.

Jul 6, 2010

another way to do setInterval

The window object has a setInterval function which allows you to run a task repeatedly at a given interval.


setInterval(doSomething, 100);

However, if the task lasts longer than the preset interval, calls can pile up. We can use the following function to make the timing more predictable.


loopTask(doSomething, 100);

function loopTask(fn, interval) {
   (function(){
       fn();
       setTimeout(arguments.callee, interval);
   })();
}

Jul 4, 2010

notes of regular expression in javascript

The simplest way to tell whether a regular expression is found in a source string is to use the "test" method.


var reg = /a/;
var found = reg.test("abc");
console.log(found);

On many occasions we use a regular expression to validate user input, for example to test whether an input is in date format. You need the "^" and "$" characters to anchor the regular expression pattern to the whole string.
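For example, a minimal sketch of anchoring (the yyyy-mm-dd pattern here is only an illustration, not a complete date validator):

```javascript
// Without ^ and $ the pattern would also match inside a longer string.
var reDate = /^\d{4}-\d{2}-\d{2}$/;

console.log(reDate.test("2010-12-07"));     // true
console.log(reDate.test("xx2010-12-07xx")); // false
```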


To do a simple search in a string, we can use the string.match(regex) syntax. This is useful when we want to know whether there is a match, or how many matches can be found. If you just care about the first match, use a non-global regular expression. In that case, if a match is found, an array object is returned: the first element of the array is the entire match, and elements 1 to (length - 1) are the sub-matches captured by the round brackets "()". The array (the match object) also has the properties "index" and "input". When a regular expression search is performed, the global RegExp object is also updated.


var src = "Please send mail to george@contoso.com and someone@example.com. Thanks!";

// Create a regular expression to search for an e-mail address.
var re_non_global = /(\w+)@(\w+)\.(\w+)/;
var result = src.match(re_non_global);

for (var n in result)
{
  console.log(n + ":" + result[n]);
}
/*
0:george@contoso.com
1:george
2:contoso
3:com
index:20
input:Please send mail to george@contoso.com and someone@example.com. Thanks!
*/

console.log("RegExp properties");
for(var n in RegExp)
{
  console.log(n + ":" + RegExp[n]);
}

/*
RegExp properties
input:Please send mail to george@contoso.com and someone@example.com. Thanks!
multiline:false
lastMatch:george@contoso.com
lastParen:com
leftContext:Please send mail to
rightContext: and someone@example.com. Thanks!
$1:george
$2:contoso
$3:com
$4:
$5:
$6:
$7:
$8:
$9:
*/

If we care about more than the first match, we need to do a global search with a global regular expression. The match object returned is still an array, but sub-matches are ignored; each element of the array is one complete match. The RegExp object stores the information of the last match.

var re_global = /(\w+)@(\w+)\.(\w+)/g;
// Because the global flag is included, the matches are in
// array elements 0 through n.
var result = src.match(re_global);
for (var n in result)
{
  console.log(n + ":" + result[n]);
}
/*
0:george@contoso.com
1:someone@example.com
*/

console.log("RegExp properties");
for(var n in RegExp)
{
  console.log(n + ":" + RegExp[n]);
}
/*
RegExp properties
input:Please send mail to george@contoso.com and someone@example.com. Thanks!
multiline:false
lastMatch:someone@example.com
lastParen:com
leftContext:Please send mail to george@contoso.com and
rightContext:. Thanks!
$1:someone
$2:example
$3:com
$4:
$5:
$6:
$7:
$8:
$9:
*/


However, string.match(regex) is less powerful than regex.exec(string), which allows you to examine each match object iteratively; to do this you need to turn on the global option of the regular expression. Each time the exec method is called, it continues from the position after the last match. Because of this, we can use a while loop.


var src = "Please send mail to george@contoso.com and someone@example.com. Thanks!";
var re_global = /(\w+)@(\w+)\.(\w+)/g;

var match;
while(match = re_global.exec(src)){
  console.log("match is found");
//match is an array with the two additional properties index and input
//  for(var i = 0, length = match.length; i < length; i++)
//  {
//    console.log(i + ":" + match[i]);
//  }
  
  for (var n in match) {
    console.log(n + ":" + match[n]);
  }

  console.log("RegExp properties");
  for(var n in RegExp)
  {
     console.log(n + ":" + RegExp[n]);
  }
}
/*

match is found
0:george@contoso.com
1:george
2:contoso
3:com
index:20
input:Please send mail to george@contoso.com and someone@example.com. Thanks!
  
RegExp properties
input:Please send mail to george@contoso.com and someone@example.com. Thanks!
multiline:false
lastMatch:george@contoso.com
lastParen:com
leftContext:Please send mail to
rightContext: and someone@example.com. Thanks!
$1:george
$2:contoso
$3:com
$4:
$5:
$6:
$7:
$8:
$9:

match is found
0:someone@example.com
1:someone
2:example
3:com
index:43
input:Please send mail to george@contoso.com and someone@example.com. Thanks!
  
RegExp properties
input:Please send mail to george@contoso.com and someone@example.com. Thanks!
multiline:false
lastMatch:someone@example.com
lastParen:com
leftContext:Please send mail to george@contoso.com and
rightContext:. Thanks!
$1:someone
$2:example
$3:com
$4:
$5:
$6:
$7:
$8:
$9:
*/  



If the global option is not enabled on the regular expression, each call to regex.exec will start from the beginning of the test string, so you cannot use the previous code to do a global search; the match returned is always the first match.



var src = "Please send mail to george@contoso.com and someone@example.com. Thanks!";

var re_non_global = /(\w+)@(\w+)\.(\w+)/;

var match = re_non_global.exec(src);

for (var n in match) {
  console.log(n + ":" + match[n]);
}
/*
0:george@contoso.com
1:george
2:contoso
3:com
index:20
input:Please send mail to george@contoso.com and someone@example.com. Thanks!
*/  
  
console.log("RegExp properties");
for(var n in RegExp)
{
   console.log(n + ":" + RegExp[n]);
}

/*
RegExp properties
input:Please send mail to george@contoso.com and someone@example.com. Thanks!
multiline:false
lastMatch:george@contoso.com
lastParen:com
leftContext:Please send mail to
rightContext: and someone@example.com. Thanks!
$1:george
$2:contoso
$3:com
$4:
$5:
$6:
$7:
$8:
$9:
*/

If we want to replace matches with our own text, we can use the str.replace(regexp|substr, newSubStr|function[, non-standard flags]) method; we must also make sure the global option of the regular expression is turned on, otherwise only the first match is replaced. We can use special symbols such as $& and $1 inside newSubStr to build the replacement. We can also pass a function that returns the replacement string dynamically. The function's parameters look like the following.


//$0 is the whole match, $1, $2, ... are the sub-matches,
//offset is the position of the match, and source is the whole input string
function replacer($0, $1, $2, /* ... */ offset, source)
{ return your_new_string; }
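To make this concrete, here is a small sketch that masks the e-mail addresses from the earlier sample string with a replacer function:

```javascript
var src = "Please send mail to george@contoso.com and someone@example.com. Thanks!";

// $0 is the whole match, $1 is the first sub-match (the user part);
// offset and source are also passed but not needed here
var masked = src.replace(/(\w+)@(\w+)\.(\w+)/g, function ($0, $1) {
    return $1 + "@*****";
});

console.log(masked); // Please send mail to george@***** and someone@*****. Thanks!
```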

Jun 2, 2010

poco fix-up during relation change

I have a POCO entity like the following. At first I did not have the Items navigation property, but it turns out this property is important. Basically, it makes the following code work.



var countBeforeInsert = _db.Entities.Count();

Entity parent = new Entity();
parent.Id = Guid.NewGuid();
parent.Name = "parent";
parent.Created = DateTime.Now;
parent.LastUpdated = DateTime.Now;

Entity child = new Entity();
child.Id = Guid.NewGuid();
child.Name = "child";
child.Created = DateTime.Now;
child.LastUpdated = DateTime.Now;
child.Container = parent;

_db.Entities.AddObject(parent);
_db.SaveChanges();

var countAfterInsert = _db.Entities.Count();

Assert.Equal(countBeforeInsert + 2, countAfterInsert);

_db.DeleteObject(parent);
_db.SaveChanges();

var countAfterDelete = _db.Entities.Count();

Assert.Equal(countAfterDelete, countBeforeInsert);


What happens if we remove the Items property? The above code doesn't work. To make it work again, we need to add a line like the following.


_db.Entities.AddObject(parent);
_db.Entities.AddObject(child);

What happened? When saving changes to the database, Entity Framework uses the ObjectStateManager to check the ObjectStateEntries. When "Items" is defined, we don't need to add the child to the cache pool; when "Items" is not defined, we do. Please note that regardless of whether we define the "Items" property, the association between parent and child always exists. When Items is accessed, Entity Framework intercepts the request and adds the children into the cache pool.


#region Navigation Properties
    
        //public virtual ICollection<Entity> Items
        //{
        //    get
        //    {
        //        if (_items == null)
        //       {
        //            var newCollection = new FixupCollection<Entity>();
        //            newCollection.CollectionChanged += FixupItems;
        //            _items = newCollection;
         //       }
        //        return _items;
        //    }
        //    set
        //    {
        //        if (!ReferenceEquals(_items, value))
        //        {
        //            var previousValue = _items as FixupCollection<Entity>;
        //            if (previousValue != null)
        //            {
        //                previousValue.CollectionChanged -= FixupItems;
        //           }
        //            _items = value;
        //            var newValue = value as FixupCollection<Entity>;
        //            if (newValue != null)
        //            {
        //                newValue.CollectionChanged += FixupItems;
        //            }
        //        }
        //    }
        //}
        //private ICollection<Entity> _items;
    
        public virtual Entity Container
        {
            get { return _container; }
            set
            {
                if (!ReferenceEquals(_container, value))
                {
                    var previousValue = _container;
                    _container = value;
                    FixupContainer(previousValue);
                }
            }
        }
        private Entity _container;

        #endregion
        #region Association Fixup
    
        private bool _settingFK = false;
    
        private void FixupContainer(Entity previousValue)
        {
        //    if (previousValue != null && previousValue.Items.Contains(this))
        //    {
        //        previousValue.Items.Remove(this);
        //    }
    
            if (Container != null)
            {
        //        if (!Container.Items.Contains(this))
        //        {
        //            Container.Items.Add(this);
        //        }
                if (ContainerId != Container.Id)
                {
                    ContainerId = Container.Id;
                }
            }
            else if (!_settingFK)
            {
                ContainerId = null;
            }
        }
    /*
        private void FixupItems(object sender, NotifyCollectionChangedEventArgs e)
        {
            if (e.NewItems != null)
            {
                foreach (Entity item in e.NewItems)
                {
                    item.Container = this;
                }
            }
    
            if (e.OldItems != null)
            {
                foreach (Entity item in e.OldItems)
                {
                    if (ReferenceEquals(item.Container, this))
                    {
                        item.Container = null;
                    }
                }
            }
        }
*/
        #endregion

May 28, 2010

Define entity from "view"

When you have an entity that maps to more than one table in a database, you have a couple of options for doing the mapping.


The first option is to import all the tables into the StorageModels but not the ConceptualModels, define your entity without mapping, then manually edit the edmx file to define the mapping. Because the entity mapping is manual, you need to define the CRUD operations manually; you can use stored procedures to do that.

<EntitySetMapping Name="DummyExes">
  <QueryView>
    select value EFTestModel.DummyEx(p.Id, p.c1, p.c2, c.c3)
    from EFTestModelStoreContainer.Dummy as p
    left join EFTestModelStoreContainer.DummyEx as c
    on p.Id = c.DummyId
  </QueryView>
</EntitySetMapping>

The second option is to define a view in the database, import that view, and map it to your entity. Because database views are normally treated as read-only, you can define the CRUD operations using stored procedures, just like with QueryView. Or you can edit the edmx file to trick the EF runtime into treating the view as a table, then define an INSTEAD OF trigger to do the update.

May 25, 2010

supporting state management for poco object in object context

var entry = context.ObjectStateManager.GetObjectStateEntry(poco);

For the Entity Framework to create change-tracking proxies for your POCO classes, the following conditions must be met:


1. The class must be public, non-abstract, and non-sealed.

2. The class must implement virtual getters and setters for all properties that are persisted.

3. You must declare collection-based relationship navigation properties as ICollection&lt;T&gt;. They cannot be a concrete implementation or another interface that derives from ICollection&lt;T&gt;.

May 19, 2010

Functional aspect of c#

Two generic delegates in C#, Action&lt;T&gt; and Func&lt;T1, T2, ...&gt;, make C# look more like a functional language. These functional features let you express an algorithm easily, without using traditional design patterns. These delegates can be compared with functions in JavaScript and lambdas in other languages. For example,


interface IStrategy
{
   void Execute(object o);
}

Using the strategy pattern, we have to write more code to compose different strategies. Using Action&lt;object&gt; is more succinct.


Action<object> oldAction = ... ;
Action<object> newAction = (o) => { Console.Write("preAction"); oldAction(o); Console.Write("postAction"); };
newAction(o);

If we want to go a step further, we can use a function (lambda, delegate) to create functions. For example:


Func<Action<object>, Action<object>> createFunc = (func) =>
{
    return (o) => {
        Console.Write("preAction");
        func(o);
        Console.Write("postAction");
    };
};

Action<object> newAction = createFunc(oldAction);
newAction(o);

Functional programming is not new. JavaScript is a functional language, and it has been doing this for a long, long time. The power of functional programming is that you can easily define new functions from existing ones and get interesting new behavior.
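The same wrapping can be sketched in JavaScript (the names are made up, and an array stands in for Console.Write so the effect is visible):

```javascript
var log = [];
var oldAction = function (o) { log.push("action:" + o); };

// createFunc takes a function and returns a new function that wraps it
var createFunc = function (fn) {
    return function (o) {
        log.push("preAction");
        fn(o);
        log.push("postAction");
    };
};

var newAction = createFunc(oldAction);
newAction(42);
console.log(log.join(",")); // preAction,action:42,postAction
```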

May 15, 2010

Return statement in javascript

return expression;
//or
return;

If there is no expression, the return value is undefined, except for a constructor invoked with new, whose return value is this (the new object) unless it explicitly returns another object.
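For example:

```javascript
function Person() {
    // no explicit return: calling with "new" returns the new object (this)
}

var p = new Person();
console.log(p instanceof Person); // true

// a plain call without "new" just returns undefined
console.log(Person()); // undefined
```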

Throw error

//"throw" is not limited just throwing Error, basically, you can throw anything.

//but normally, you are supposed to 

throw new Error(reason);
//or
throw {name: exceptionName, message:reason};

Array in javascript

Technically, an array in JavaScript is not really an array in the sense of other languages like C#; it is more like a dictionary, and the index is actually used as the key of an entry in the dictionary, as the following shows.

var a = [];
a[0] = "fred";
alert(a["0"] == a[0]);

It is unique in that it has a length property: when you push a new item, the array automatically increases its length. It also supports the traditional for statement, like for (i = 0; i < a.length; i++) { ... }. Because an array is also an object, we can also use a statement like "for (var i in a)", but that is not recommended because it defeats the purpose of an array.
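A small sketch of the length behavior:

```javascript
var a = [];
a.push("fred");        // length grows automatically
a[4] = "barney";       // assigning past the end also grows length
console.log(a.length); // 5

// for-in walks all enumerable keys, including non-index properties,
// so the classic for loop is safer for arrays
a.extra = "not an element";
var count = 0;
for (var i = 0; i < a.length; i++) { count++; }
console.log(count); // 5
```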



On the other hand, we can simulate array features on a normal object. jQuery uses a technique like the following, so that a jQuery object looks like an array, but it is not one.


var push = [].push;
var y = {};
push.call(y, 100);
alert(y.length); //1
alert(y[0]); //100

To delete an element from an array, do not use "delete array[index]"; use array.splice(index, 1) instead.
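A quick sketch of the difference:

```javascript
var a = ["a", "b", "c"];

delete a[1];           // leaves a hole: length is still 3
console.log(a.length); // 3
console.log(1 in a);   // false

var b = ["a", "b", "c"];
b.splice(1, 1);           // really removes the element
console.log(b.length);    // 2
console.log(b.join(",")); // a,c
```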

supplant function

var template = '<table border="{border}">' +
    '<tr><th>Last</th><td>{last}</td></tr>' +
    '<tr><th>First</th><td>{first}</td></tr>' +
    '</table>';
    

var data = { first: "Carl", last: "Hollywood", border: 2 };

mydiv.innerHTML = template.supplant(data);

if (typeof String.prototype.supplant !== 'function')
{
     String.prototype.supplant = function (o) {
        return this.replace(/{([^{}]*)}/g, 
                function(a, b) {
                  //a is $0, b is $1 (match 1)
                   var r = o[b];
                   return typeof r === 'string' ? r : a;
                });
    };
}

number, string, boolean conversion

var n = 1;
var s = "" + n; //number to string
//or
String(n);
//
n = +s;  //string to number
//
n = Number(s); //

//to convert to boolean
Boolean(value);
//or
!!value;

May 13, 2010

javascript split string into array by whitespace

s.split(/\s+/);
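For example (note the caveat that leading whitespace produces an empty first element):

```javascript
var s = "one  two\tthree";
var parts = s.split(/\s+/);
console.log(parts.join("|")); // one|two|three

// Caveat: a leading space yields an empty first element,
// so trim the string first if it may start with whitespace.
var parts2 = " one two".split(/\s+/);
console.log(parts2.length);     // 3
console.log(parts2[0] === ""); // true
```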

jQuery queue and deque

You queue a series of functions, then run them one after another.

var area = document.getElementById("area");

$.queue(area, "test", function (fn) { alert("hello"); fn(); });

$("#test").click(function () {
    var data = $.dequeue(area, "test");
    data();
});

trimming leading and trailing whitespace in javascript

var result = text.replace( /^(\s|\u00A0)+|(\s|\u00A0)+$/g, "" )

String.prototype.trim = function () {
    return this.replace(
        /^\s*(\S*(\s+\S+)*)\s*$/, "$1"); 
}; 

May 12, 2010

What is javascript

Here is how Douglas Crockford defines javascript:

Javascript is a functional language with dynamic objects and familiar syntax.

In his definition, he stresses that the most important feature of JavaScript is that it is a functional language (Scheme), the second most important is dynamic objects (Self), and the last part is the familiar syntax (Java). The familiar syntax tends to make new JavaScript developers think that JavaScript is an easy language, but it is not. To really unleash the power of JavaScript, you need a deep understanding of its functional features and dynamic objects.

adding a new member to all objects in javascript

function Person(){}

var root = Object.prototype;
root.say = function () { alert("hi"); }
  
Person.say();
var p = new Person();
p.say();

How to tell type of an object in javascript

We can use typeof, instanceof, and constructor to tell the type of an object, but they behave differently. The "typeof" operator returns "object" for all custom types. "instanceof" returns true if the constructor's prototype object appears in the object's prototype chain. "constructor" returns the function whose prototype object actually holds the constructor property; in the code below it is Person, because Developer.prototype was replaced with a Person instance, losing Developer's own constructor reference. Below is some test code.

function Person() {}
function Developer(){}
Developer.prototype = new Person();

var dev = new Developer();

assert( typeof dev == "object");
assert( dev instanceof Developer);
assert( dev instanceof Person);
assert( dev.constructor == Person);
assert( dev.constructor != Developer);

a merge function

function merge(root){
  for ( var i = 0; i < arguments.length; i++ )
    for ( var key in arguments[i] )
      root[key] = arguments[i][key];
  return root;
}

May 7, 2010

position property

In CSS, the position offset properties top, left, right, and bottom are auto by default; a common misconception is that they default to 0, which is not the case. So when you switch between position values (static, relative, absolute, fixed) without setting top/left/right/bottom, there should be no visible change, except that the absolute and fixed values disable margin collapsing.

Apr 29, 2010

Loading common model into view

In our views, there is some data required across all pages, such as sidebar data, menus, and so on. We can split them into partial views rendered with Html.RenderAction, but this should not be overused, because of performance issues and because it somewhat violates separation of concerns: the view ends up controlling data. If we want to use Html.RenderPartial instead, the data can be explicitly passed in the parent view, or we can inject the data into ViewData before the ActionResult is returned from the controller. For example, we can use a base controller and override OnActionExecuting.

public class BaseController : Controller
{
    protected override void OnActionExecuting(ActionExecutingContext context)
    {
        ViewData.Add(your_data);
        base.OnActionExecuting(context);
    }
}

Another option is to use an ActionFilterAttribute. It is designed to address cross-cutting concerns like authorization, logging, and so on, but we can use it to load common data as well.


public class RequireCommonDataAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        filterContext.Controller.ViewData.Add(your_data);
    }
}

public class MyController : Controller
{
    [RequireCommonData]
    public ActionResult View()
    {
    }
}

Apr 14, 2010

How to use bookmark in WF4

A Bookmark can be used to implement an event-driven style of workflow. Below is some sample code.

//in your activity, override the Execute method to make your workflow go idle
protected override void Execute(NativeActivityContext context)
{
    string bookmark = this.BookmarkName.Get(context);
    context.CreateBookmark(bookmark,
                           delegate(NativeActivityContext ctx, Bookmark bMark, object state)
                           {
                               string input = state as string;
                               ctx.SetValue(this.Result, input);
                           });
}

//in your host, wakeup your workflow
application.ResumeBookmark(bookmarkName, text);

Loading string as workflow

Here is some sample code that loads a string as a workflow.

Stream stream = new MemoryStream(ASCIIEncoding.Default.GetBytes(stringXaml));
Activity wf = ActivityXamlServices.Load(stream);
IDictionary results = WorkflowInvoker.Invoke(wf);              

WF4 hosting option

  • WorkflowInvoker

    Simple "method call" style of workflow execution for short-lived workflows (persistence is not allowed).

  • WorkflowApplication

    Single-instance host with asynchronous execution; supports persistence, provides a set of instance operations and notifications of instance life-cycle events.

  • WorkflowServiceHost

    Multi-instance host for WF4 workflow services and also for workflows that are not services; supports persistence, exposes endpoints for instance operations, supports configuration; Windows Server AppFabric provides IIS/WAS deployment, configuration, management, and monitoring support.

  • WorkflowInstance

    Abstract base class for creating custom hosts; provides access to the lowest level of hosting capabilities.

Apr 1, 2010

Debugging T4

1. Add template debug="true" to the template directive.
2. cd %temp% and find the latest generated .cs file.
3. Add System.Diagnostics.Debugger.Break() in your template file. Below is the generated code for a simple template.
namespace Microsoft.VisualStudio.TextTemplating28A7CF507D33073FC06B43678F225DE4
{
    using System;
    
    
    #line 1 "C:\other_projects\Vs.netCustomization\T4Demo\Template1.t4"
    public class GeneratedTextTransformation : Microsoft.VisualStudio.TextTemplating.TextTransformation
    {
        public override string TransformText()
        {
            try
            {
                
                #line 3 "C:\other_projects\Vs.netCustomization\T4Demo\Template1.t4"
 
System.Diagnostics.Debugger.Break();

this.WriteLine("hello");


                
                #line default
                #line hidden
            }
            catch (System.Exception e)
            {
                e.Data["TextTemplatingProgress"] = this.GenerationEnvironment.ToString();
                throw;
            }
            return this.GenerationEnvironment.ToString();
        }
    }
    
    #line default
    #line hidden
}

The base class's dll is in C:\WINDOWS\Microsoft.Net\assembly\GAC_MSIL\Microsoft.VisualStudio.TextTemplating.10.0\v4.0_10.0.0.0__b03f5f7f11d50a3a\Microsoft.VisualStudio.TextTemplating.10.0.dll (Microsoft.VisualStudio.TextTemplating.10.0, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a).

The text template transformation process has two steps. In the first step, the text template transformation engine creates a class that is referred to as the generated transformation class. In the second step, the engine compiles and executes the generated transformation class to produce the generated text output. The generated transformation class inherits from TextTransformation. Any class specified in an inherits directive in a text template must itself inherit from TextTransformation. TransformText is the only abstract member of this class.

vs.net template

User template locations:

  C:\Documents and Settings\[user_name]\My Documents\Visual Studio 2010\Templates\ItemTemplates
  C:\Documents and Settings\[user_name]\My Documents\Visual Studio 2010\Templates\ProjectTemplates

Global template locations:

  C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\ItemTemplates
  C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\ProjectTemplates

When you manually pack your template files into a zip file, do not put them into a folder and zip the folder; zip the files directly (not under a folder). More information can be found at http://msdn.microsoft.com/en-us/library/6db0hwky%28v=VS.80%29.aspx

Mar 23, 2010

The "OnDelete" property of navigation in Entity Framework

This property affects the behavior of Entity Framework when deleting an entity. Here are two scenarios.

[Fact]
public void can_delete_order_with_lines_loaded()
{
    TestEntities db = new TestEntities();
    Order o1 = db.Orders.Include("OrderLines").First();
    db.DeleteObject(o1);
    db.SaveChanges();
}

[Fact]
public void can_delete_order_with_no_lines_loaded()
{
    TestEntities db = new TestEntities();
    Order o1 = db.Orders.First();
    db.DeleteObject(o1);
    db.SaveChanges();
}

If the value of OnDelete is Cascade, then in the first case, because EF knows the order has order lines (they are loaded in memory), it will delete the children (order lines) first and then delete the order. In the second case, because EF thinks the order has no children (none are loaded in memory), it will just delete the order.

If the value of OnDelete is "None", then in the first case it will throw an exception like "System.InvalidOperationException : The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted." Because EF knows there are children in memory, it cannot delete the order unless you manually delete the children first. In the second case, because there are no children in memory, EF will just delete the order.

Please note that EF does not consider the foreign key constraints between tables in the database when it performs the above operations. So if EF only deletes the parent, but in the database there is no cascade on the foreign key, the operation will fail as well. However, the OnDelete value of the navigation does affect the database generation script if you choose to generate the database from the model: if it is Cascade, EF generates a foreign key with cascade; if not, it generates a foreign key without cascade. But this is design-time behavior, not runtime behavior.

Mar 8, 2010

jQuery.globalEval

The $.globalEval utility is a useful function: it is preferred over plain eval for running script returned by an ajax call, because it always evaluates in global scope. What it does is add a script tag to the document, then remove it right away.

globalEval: function( data ) {
 if ( data && rnotwhite.test(data) ) {
  // Inspired by code by Andrea Giammarchi
  // http://webreflection.blogspot.com/2007/08/global-scope-evaluation-and-dom.html
  var head = document.getElementsByTagName("head")[0] || document.documentElement,
   script = document.createElement("script");

  script.type = "text/javascript";

  if ( jQuery.support.scriptEval ) {
   script.appendChild( document.createTextNode( data ) );
  } else {
   script.text = data;
  }

  // Use insertBefore instead of appendChild to circumvent an IE6 bug.
  // This arises when a base node is used (#2709).
  head.insertBefore( script, head.firstChild );
  head.removeChild( script );
 }
},

$.globalEval("document.write('<h1>hello</h1>');");
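To see why global-scope evaluation matters, here is a small sketch (plain JavaScript, not jQuery code) contrasting direct eval, which sees the caller's local scope, with indirect eval, which runs in global scope much like an injected script tag does:

```javascript
// Direct eval can read the caller's local variables; indirect eval cannot,
// because it evaluates in global scope - the behavior $.globalEval provides
// via a temporary <script> tag.
function scopeDemo() {
  var local = "only visible here";
  var direct = eval("typeof local");        // "string": sees the local var
  var indirect = (0, eval)("typeof local"); // "undefined": global scope only
  return [direct, indirect];
}

console.log(scopeDemo()); // ["string", "undefined"]
```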

Mar 7, 2010

jQuery event binding internal

The event binding in jQuery is very interesting. It is designed with several goals: 1. It should work consistently across browsers. 2. It should allow attaching more than one handler function to an event. The event registration information is saved via jQuery.data(elem); jQuery.cache is the actual data store for all of this data, and a piece of data in the cache is associated with the element. Below is some pseudo code to demonstrate the data structure.

function fn(..){..}
fn.guid = ..

handleObject = { data: ..,
                 guid: ..,
                 namespace: "..",
                 type: "click",
                 handler : fn
               }

clickEventHandlers = handleObject[];

eventsOfElem = [ clickEventHandlers, dblClickEventHandlers, .. ];

elemData = jQuery.data(elem);

elemData = { events : eventsOfElem, handle : bindingFn }

bindingFn = function (..)
{
   jQuery.event.handle.apply(eventHandle.elem, arguments );
}

elem.addEventListener( type, bindingFn , false );
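The one-real-listener, many-handlers idea above can be sketched with plain objects (a simplified illustration, not jQuery's actual code):

```javascript
// One real listener per event type; it fans out to all attached handlers.
function createDispatcher() {
  var handlers = {}; // type -> array of handler functions

  return {
    add: function (type, fn) {
      (handlers[type] = handlers[type] || []).push(fn);
    },
    // this is the single function that would be passed to addEventListener
    dispatch: function (type, event) {
      (handlers[type] || []).forEach(function (fn) { fn(event); });
    }
  };
}

var d = createDispatcher();
var log = [];
d.add("click", function () { log.push("first"); });
d.add("click", function () { log.push("second"); });
d.dispatch("click", {});
console.log(log); // ["first", "second"]
```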

shallow copy in jQuery

We can use jQuery to copy (clone) an object to a new object, so that the old object and the new object can evolve independently.


//shallow copy object
var oldObj = { name: "fred" };
var newObj = jQuery.extend({}, oldObj);
newObj.name = "jeff";
alert(oldObj.name); //fred


To do a similar thing for an array, we can use the [].slice() method


//clone array
var arr = [1, 2];
var newArray = arr.slice(0);

copy array in javascript

var x = [1, 2];
var y = x.slice(0);
y[0] = 100; //y --> [100, 2]
alert(x[0]); //1

The new copy evolves independently of the old copy.
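One caveat worth noting: slice(0) makes a shallow copy, so while the top-level slots are independent, nested objects are still shared:

```javascript
// slice(0) copies the top-level slots, but nested objects remain shared.
var a = [{ n: 1 }, { n: 2 }];
var b = a.slice(0);

b[0] = { n: 100 }; // replacing a slot only affects the copy
console.log(a[0].n); // 1

b[1].n = 200;      // mutating a shared nested object affects both
console.log(a[1].n); // 200
```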

Mar 6, 2010

Easy Setter Functions in jquery

One of the new features in jQuery 1.4 is setter functions. For a while now, you’ve been able to pass a function into .attr(), and the return value of that function is set into the appropriate attribute. This functionality has now been extended to all setter methods: .css(), .attr(), .val(), .html(), .text(), .append(), .prepend(), .before(), .after(), .replaceWith(), .wrap(), .wrapInner(), .offset(), .addClass(), .removeClass(), and .toggleClass(). Additionally, for the following methods, the current value of the item is passed into the function as the second argument: .css(), .attr(), .val(), .html(), .text(), .append(), .prepend(), .offset(), .addClass(), .removeClass(), and .toggleClass().

$('a[target]').attr("title", function(i, currentValue){
  return currentValue+ " (Opens in External Window)";
});

$("#xx").text(function (i, currentValue) { 
    return currentValue+ "modified by fred";
});

In jQuery 1.3.2, attr is defined as follows:

attr: function( name, value, type ) {
 var options = name;

 // Look for the case where we're accessing a style value
 if ( typeof name === "string" )
  if ( value === undefined )
   return this[0] && jQuery[ type || "attr" ]( this[0], name ); //jQuery.attr(this[0], name);

  else {
   options = {};
   options[ name ] = value;
  }

 // Check to see if we're setting style values
 return this.each(function(i){
  // Set all the styles
  for ( name in options )
   jQuery.attr(
    type ?
     this.style :
     this,
    name, jQuery.prop( this, options[ name ], type, i, name )
   );
 });
},

But in 1.4.2, it is defined as follows.

attr: function( name, value ) {
 //the access function is defined in core module
 return access( this, name, value, true, jQuery.attr );
},

We can see that an internal access function really does the job.

// Multifunctional method to get and set values on a collection
// The value(s) can optionally be executed if it is a function
// elems is an array
// key, value is a dictionary pair
// exec is a boolean; pass true if you want the value to be callable as a function
// fn is a function used to access the element
// pass is usually undefined; it is set internally when recursing for the object form
function access( elems, key, value, exec, fn, pass ) {
 var length = elems.length;
 
 // Setting many attributes
  //if key is a complex object like { name: "fred", age: 18 }
 if ( typeof key === "object" ) {
  for ( var k in key ) {
   access( elems, k, key[k], exec, fn, value );
  }
  return elems;
 }
 
 // Setting one attribute
 if ( value !== undefined ) {
  // Optionally, function values get executed if exec is true
   //if value is a function which dynamically returns the value to set,
   //and pass is not set,
   //and exec is true,
   //then keep exec true
   //i.e. exec is true only when the caller wants to evaluate the value function and value really is a function
   exec = !pass && exec && jQuery.isFunction(value);
  
  //your callback function is like
  //function( elem, name, value)
  //for example
  //Query.fn.css = function( name, value ) {
  // return access( this, name, value, true, function( elem, name, value ) {
  //or function( elem, name, value, pass ) 
  //for example attr: function( elem, name, value, pass )
  for ( var i = 0; i < length; i++ ) {
   //fn( elems[i], key, exec ? value.call( elems[i], i, fn( elems[i], key ) ) : value, pass );
   //exec means the value to set needs to be determined by calling the value function
   var elem = elems[i];
   var valueToSet;
   if (exec)
   {
    //get the current value first
    var currentValue = fn(elem, key);
    //then call the value function to get the new value
    valueToSet = value.call(elem, i, currentValue);
   }
   else
   {
    valueToSet = value;
   }

   fn(elem, key, valueToSet, pass);
  }
  
  return elems;
 }
 
 // Getting an attribute
 return length ? fn( elems[0], key ) : undefined;
}

The jQuery.fn.css method also uses this internal access function.

jQuery.fn.css = function( name, value ) {
 //rather than calling function(elem, name, value) directly, call the
 //access function and let it call back function(elem, name, value)
 return access( this, name, value, true, function( elem, name, value ) {
  if ( value === undefined ) {
   return jQuery.curCSS( elem, name );
  }
  
  if ( typeof value === "number" && !rexclude.test(name) ) {
   value += "px";
  }

  jQuery.style( elem, name, value );
 });
};

how to test whether an object has a member

Object.prototype.hasMember = function (memberName)
{
    return memberName in this;
};
  
var attrFn = {
    val: 1,
    css: 2,
    html: 3,
    text: 4,
    data: 5,
    width: 6,
    height: 7,
    offset: 8
  };
  
alert(attrFn.hasMember("val"));

/*
don't test attrFn["val"] for truthiness, because it is possible you defined a member
like attrFn = { val : undefined }; val is a member, but its value is undefined.
The "in" operator also traverses the prototype chain; if you want to check direct members only, use "hasOwnProperty".
*/
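A quick illustration of both pitfalls the comment mentions:

```javascript
// A member can exist with the value undefined, and "in" also sees
// members inherited through the prototype chain.
var obj = { val: undefined };

console.log(obj.val !== undefined);          // false - looks absent...
console.log("val" in obj);                   // true  - but it IS a member
console.log("toString" in obj);              // true  - inherited member
console.log(obj.hasOwnProperty("toString")); // false - direct members only
```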

Mar 5, 2010

how jQuery uses the Sizzle selector

The call stack is as follows:
return new jQuery.fn.init( selector, context ); 
 return (context || rootjQuery).find( selector ); 
 jQuery.find( selector, this[i], ret ); 
return makeArray(context.querySelectorAll(query), extra ); 

//if browser does not support W3C Selectors API
//will call this
//var Sizzle = function(selector, context, results, seed) { 



//jQuery.find = Sizzle

the internals of the $.end() method

jQuery methods that alter the original jQuery wrapper set are considered to be destructive. The reason is that they do not maintain the original state of the wrapper set. Not to worry; nothing is really destroyed or removed. Rather, the former wrapper set is attached to the new set as a reference.

pushStack: function( elems, name, selector ) {
 // Build a new jQuery matched element set
 var ret = jQuery();

 if ( jQuery.isArray( elems ) ) {
  push.apply( ret, elems );
 
 } else {
  jQuery.merge( ret, elems );
 }

 // Add the old object onto the stack (as a reference)
 ret.prevObject = this;

 ret.context = this.context;

 if ( name === "find" ) {
  ret.selector = this.selector + (this.selector ? " " : "") + selector;
 } else if ( name ) {
  ret.selector = this.selector + "." + name + "(" + selector + ")";
 }

 // Return the newly-formed element set
 return ret;
},

 end: function() {
  return this.prevObject || jQuery(null);
 },
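The mechanism can be sketched with a tiny wrapper of our own (an illustration only; the names Wrapper and filterEven are made up, not jQuery's): each "destructive" method returns a new wrapper that links back to the previous one, and end() simply follows that link:

```javascript
function Wrapper(elems, prev) {
  this.elems = elems;
  this.prevObject = prev || null;
}

// a "destructive" method: returns a NEW wrapper linked to the old one
Wrapper.prototype.filterEven = function () {
  var filtered = this.elems.filter(function (n) { return n % 2 === 0; });
  return new Wrapper(filtered, this);
};

// end() walks back to the set before the last destructive operation
Wrapper.prototype.end = function () {
  return this.prevObject || new Wrapper([], null);
};

var set = new Wrapper([1, 2, 3, 4]);
var evens = set.filterEven();
console.log(evens.elems);       // [2, 4]
console.log(evens.end().elems); // [1, 2, 3, 4]
```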

$.extend

The extend function is used to extend objects. We can use it to extend a normal object, to add jQuery utility functions (via $.extend), and to add jQuery instance methods (via $.fn.extend).

jQuery.extend = jQuery.fn.extend = function() { }


var x = {};
$.extend( x, {"name": "fred" });
alert(x.name); //fred
$.extend({"xxx": "fred" });
alert($.xxx); //fred

$.fn.extend( {"yyy": "jeff" });
alert($().yyy); //jeff
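How can the same function serve all three roles? A minimal sketch (a simplification, not jQuery's real implementation) is that with a single argument it copies onto `this`, whichever object the function happens to be attached to:

```javascript
function extend(target, source) {
  if (source === undefined) { // one-argument form: extend `this`
    source = target;
    target = this;
  }
  for (var key in source) {
    if (Object.prototype.hasOwnProperty.call(source, key)) {
      target[key] = source[key];
    }
  }
  return target;
}

var $ = { fn: {} };
$.extend = $.fn.extend = extend;

var x = {};
extend(x, { name: "fred" }); // two-argument form: extend x
$.extend({ xxx: "fred" });   // one-argument form: extend $ itself
$.fn.extend({ yyy: "jeff" });// one-argument form: extend $.fn

console.log(x.name, $.xxx, $.fn.yyy); // fred fred jeff
```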

$(fn)

In jQuery, $(fn) is a shortcut for $(document).ready(fn). First $.fn.ready(fn) is called; it calls jQuery.bindReady() first, then adds fn to a readyList. The jQuery.bindReady() function tries to hook up a DOMContentLoaded handler (in IE the event is called onreadystatechange). The DOMContentLoaded handler calls the jQuery.ready() function, and inside that function the functions in the readyList are invoked.

if ( jQuery.isFunction( selector ) ) {
  rootjQuery.ready( selector );
}

ready: function( fn ) {
        // before hookup event, bind the real event first
 // Attach the listeners
 jQuery.bindReady();

 // If the DOM is already ready
 if ( jQuery.isReady ) {
  // Execute the function immediately
  fn.call( document, jQuery );

 // Otherwise, remember the function for later
 } else if ( readyList ) {
  // Add the function to the wait list
  readyList.push( fn );
 }

 return this;
},

bindReady: function() {
    //ensure this method is only run once
 if ( readyBound ) {
  return;
 }

 readyBound = true;

 // Catch cases where $(document).ready() is called after the
 // browser event has already occurred.
 if ( document.readyState === "complete" ) {
  return jQuery.ready();
 }

 // Mozilla, Opera and webkit nightlies currently support this event
 if ( document.addEventListener ) {
  // Use the handy event callback
  document.addEventListener( "DOMContentLoaded", DOMContentLoaded, false );
  
  // A fallback to window.onload, that will always work
  window.addEventListener( "load", jQuery.ready, false );

 // If IE event model is used
 } else if ( document.attachEvent ) {
  // ensure firing before onload,
  // maybe late but safe also for iframes
  document.attachEvent("onreadystatechange", DOMContentLoaded);
  
  // A fallback to window.onload, that will always work
  window.attachEvent( "onload", jQuery.ready );

  // If IE and not a frame
  // continually check to see if the document is ready
  var toplevel = false;

  try {
   toplevel = window.frameElement == null;
  } catch(e) {}

  if ( document.documentElement.doScroll && toplevel ) {
   doScrollCheck();
  }
 }
},

if ( document.addEventListener ) {
 DOMContentLoaded = function() {
  document.removeEventListener( "DOMContentLoaded", DOMContentLoaded, false );
  jQuery.ready();
 };

} else if ( document.attachEvent ) {
 DOMContentLoaded = function() {
  // Make sure body exists, at least, in case IE gets a little overzealous (ticket #5443).
  if ( document.readyState === "complete" ) {
   document.detachEvent( "onreadystatechange", DOMContentLoaded );
   jQuery.ready();
  }
 };
}
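Stripped of the browser-specific wiring, the run-now-or-queue logic above boils down to this small pattern (a simplified sketch, not jQuery's exact code):

```javascript
var isReady = false;
var readyList = [];

function ready(fn) {
  if (isReady) {
    fn();               // DOM already loaded: run the callback immediately
  } else {
    readyList.push(fn); // otherwise queue it until the DOM is ready
  }
}

// this is what the DOMContentLoaded handler ultimately triggers
function fireReady() {
  isReady = true;
  readyList.forEach(function (fn) { fn(); });
  readyList = [];
}

var order = [];
ready(function () { order.push("queued"); });
fireReady();
ready(function () { order.push("immediate"); });
console.log(order); // ["queued", "immediate"]
```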

jQuery is an array-like object

In JavaScript, an array is an object, so there are lots of benefits to using an array-like object: length, indexing, slice. A jQuery object is a collection of elements, so using it like an array is very convenient. It seems the jQuery object cannot use Array as its prototype for some reason, so how can we use a plain object like an array? Please read the following code.

var x = {};
x[0] = "zero";
x[1] = "one";

var array = Array.prototype.slice.call(x, 0);
alert(array.length); //0, because x has no length property yet

alert(x[0]); //zero
alert(x.length); //undefined
alert(x instanceof Array); //false, x is not an array

Array.prototype.push.call(x, "zero overwritten");
alert(x[0]); //zero overwritten; push treated the missing length as 0
alert(x[1]); //one
alert(x.length); //1, push set the length property

array = Array.prototype.slice.call(x, 0);
alert(array.length); //1

Array.prototype.push.call(x, "one overwritten");
alert(x[1]); //one overwritten
alert(x.length); //2; length is determined by how many elements you push

array = Array.prototype.slice.call(x, 0);
alert(array.length); //2

Mar 4, 2010

arguments.callee

var sum = function(i)
{
  if (i == 1)
  {
    return i; 
  }
  else 
  {
    
   return i + arguments.callee(i-1); 
  }
}
alert(sum(3));
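Note that arguments.callee is disallowed in ES5 strict mode; a named function expression gives the same self-reference without it (an alternative sketch, not from the original post):

```javascript
// The function expression's own name is visible inside its body,
// so the recursion no longer needs arguments.callee.
var sum = function sumInner(i) {
  return i === 1 ? 1 : i + sumInner(i - 1);
};

console.log(sum(3)); // 6
```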

makeArray function

function highest(){ 
  //return arguments.slice(1).sort(function(a,b){ 
  return makeArray(arguments).sort(function(a,b){ 
    return b - a; 
  }); 
} 
 
function makeArray(array){ 
  return [].slice.call( array ); 
} 

Mar 2, 2010

change in mvc2 project

  1. Project file change: replace guid {603c0e0b-db56-11dc-be95-000d561079b0} with {F85E285D-A4E0-4152-9332-AB1D724D3325} in the ProjectTypeGuids node.
  2. Reference dll change: update all references in the web.config files and project files, and update the runtime binding.
    <runtime>
      <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
        <dependentAssembly>
          <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35"/>
          <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0"/>
        </dependentAssembly>
      </assemblyBinding>
    </runtime>

  3. JavaScript file changes: copy the new scripts from a new project to the old project.

Feb 16, 2010

Covariant/Contravariant Support for generic interface .net 4.0

The introduction of generics in .NET makes covariance/contravariance support a very interesting topic. In .NET 4, there is some support for generic interfaces. But covariance and contravariance for generic classes are still not supported, with good reason, so you cannot write the following code in either .NET 2 or .NET 4.

[TestMethod]
public void no_covariant_support_for_generic_class()
{
    //can not be compiled
    List<object> list = GetListOfString();
    //if support, what if use write
    list[0] = 1; //this is not secure

}

public List<string> GetListOfString()
{
    return new List<string>();
}

[TestMethod]
public void no_contravariant_support_generic_class()
{
    //can not be compiled
    Process(new List<string>());
}

public void Process(List<object> input)
{
    //if this is support, what if user write
    input[0] = 1; //this is not secure
    
}

The reason for not supporting covariance or contravariance on generic classes is that .NET needs to preserve type safety. But as long as we can restrict users to using the type parameter in a type-safe fashion, we should allow it. .NET 4 provides covariance and contravariance support for generic interfaces and delegates by using "in" and "out" to decorate type parameters, so that the interface can only use the type parameter in an "in" or "out" position. Here is an example.

interface IHouse<out T>
{
   T Checkout();
}

In the above example, T can only appear in the return type position, so this interface compiles.

If instead you write the following code, there will be a compiler error like "Invalid variance: The type parameter 'T' must be contravariantly valid on 'CheckIn(T)'. 'T' is covariant."

interface IHouse<out T>
{
   T Checkout();
   void CheckIn(T input);
}

To fix this error, we can add a new contravariant type parameter using "in", like the following, and change the client code as well. The K type parameter can only appear in input parameter positions.

interface IHouse<out T, in K>
{
   T Checkout();
   void CheckIn(K input);
}


class House<T, K> : IHouse<T, K>
{
   public T Checkout()
   {
     return default(T);
   }

   public void CheckIn(K input)
   {
       Console.WriteLine(input.GetType());
   }
}

IHouse<string, object> x = new House<string, object>();
IHouse<object, string> y = x;
object rtn = y.Checkout();
y.CheckIn("hello"); //this actually calls x.CheckIn(object input), which is fine

OK, so what is the big deal about covariance and contravariance? The short answer is that it makes it possible to write code you think should make sense but that you couldn't write before .NET 4. Here is a covariance example in .NET 4.


//in .net 2, you have to write
IEnumerable<string> list = new List<string> { "1", "2", "3" };
//in .net 4, you can write the following because of the "out" keyword; it supports covariance
//public interface IEnumerable<out T> : IEnumerable
IEnumerable<object> list2 = new List<string> { "1", "2", "3" };

//but you still can not write this in .net 4:
//covariance and contravariance in generic type parameters are supported for reference types,
//but they are not supported for value types
IEnumerable<object> list3 = new List<int> { 1, 2, 3 };

compile-time support for covariance and contravariance in .net 2.0


In .NET 2.0, a delegate can be associated with a named method. This facility provides support for covariance and contravariance, as the following sample code shows.

        delegate void ProcessString(string item);
        delegate object GetObject();

        void RunObject(object item)
        {
            Console.WriteLine(item);
        }

        string RetrieveString()
        {
            return string.Empty;
        }

        [TestMethod]
        public void delegate_shortcut_support_contraVariant()
        {
            //delegate with derived type parameter input <--(accept) method with base type parameter input
            ProcessString processString = this.RunObject;
            processString("string");
            //-->
            RunObject("string");
        }

        [TestMethod]
        public void delegate_shortcut_support_coVariant()
        {
            //delegate with base type output <--(accept) method with derived type output 
            GetObject getObject = this.RetrieveString;
            object returnValue = getObject();
            //-->
            object returnValue2 = this.RetrieveString();
        }

Feb 15, 2010

The semantics of c# interface

We all use the interface construct in C#. Recently I came across Phil Haack's blog post Interface Inheritance Esoterica and decided to find out more, so I wrote a very simple example like the one below.

public interface IAnimal
    {
        void Walk();
    }

    public interface IBird : IAnimal
    {
        void Fly();
    }

    public class Bird : IBird 
    {
        void IBird.Fly()
        {
            throw new NotImplementedException();
        }

        void IAnimal.Walk()
        {
            throw new NotImplementedException();
        }
    }

Then I used ILDASM to examine the generated IL; the Bird class actually implements two interfaces.


//Bird IL
.class public auto ansi beforefieldinit DemoInterface.Bird
       extends [mscorlib]System.Object
       implements DemoInterface.IBird,
                  DemoInterface.IAnimal
{
} // end of class DemoInterface.Bird

//
.class interface public abstract auto ansi DemoInterface.IBird
       implements DemoInterface.IAnimal
{
} // end of class DemoInterface.IBird


We can see that Walk is not a member of IBird; the semantics here are that a class implementing IBird should also implement IAnimal. So I changed my code to the following.

public interface IAnimal
    {
        void Walk();
    }

    public interface IBird //: IAnimal
    {
        void Fly();
    }

    public class Bird : IBird , IAnimal
    {
        void IBird.Fly()
        {
            throw new NotImplementedException();
        }

        void IAnimal.Walk()
        {
            throw new NotImplementedException();
        }
    }

This time the generated IL for Bird is exactly the same as for the previous code. The only difference is that IBird does not "implement" IAnimal. In the first example, the semantics of Bird implementing IBird are as follows.


  1. The Bird class is an IBird (or we can say it has the IBird "gene"). Even if the IBird interface has no members, it still has semantics; System.Web.UI.INamingContainer is an example.
  2. The Bird class is an IAnimal.
  3. The Bird class implements IBird's member Fly().
  4. IAnimal's members are not members of IBird, but IBird supports IAnimal's members.
  5. A class that implements IBird also needs to implement IAnimal.
  6. The Bird class implements IAnimal's member Walk(), because of the previous point.

The original intention of an interface is a contract, and a contract can be composite, which means a contract can be a combination of other contracts.

Feb 12, 2010

Get started with EntityFramework 4

We have the following options for using Entity Framework 4.


  • Using designer
    • Model First

      We can design the model first, then generate the SQL for the model, use the SQL to generate the database, and then connect the model to the database. The process of SQL generation is customizable. You can change this workflow (process) and the SQL templates; these files are located at C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\Extensions\Microsoft\Entity Framework Tools\DBGen.
      The ADO.NET team blog has an article about this approach.

    • Database First

      This is a classic approach. You have your database in the first place, and generate model based on that.


    Since you are using the designer, you have other customization options related to code generation from the model the designer creates. By default the designer uses a code generation strategy called "Default". If your model file name is "model.edmx", the strategy generates a "model.cs" file, and the generated code contains the ObjectContext, the entities, and so forth. The entities look like the following.


    [EdmEntityTypeAttribute(NamespaceName="CrmModel", Name="Post")]
    [Serializable()]
    [DataContractAttribute(IsReference=true)]
    public partial class Post : EntityObject
    { ... }
    

    Since EF4 uses T4 to generate code, you can customize the code generation by right-clicking the designer and adding a code generation item. Adding a code generation item turns off the default code generation strategy, and your model.cs file will be empty.


    There are a couple of code generation items available now. When you add one, a T4 file (*.tt) is added to the project. You can customize the .tt file as you want.


  • Use code only
    Feature CTP Walkthrough: Code Only for the Entity Framework (Updated)

Jan 22, 2010

Nullable notes

Nullable<T> is a value type. But the following code can be compiled.

int? x = null;

Isn't "null" supposed to be used with reference types? Why can it be assigned to a value type? It turns out to be just syntax sugar; the compiler emits the following code. It does not call any constructor.


IL_0001:  ldloca.s   x
IL_0003:  initobj    valuetype [mscorlib]System.Nullable`1<int32>

However, if you write the following code, the compiler will emit MSIL like the following. It calls the constructor.


int? y = 123;

IL_0009:  ldloca.s   y
IL_000b:  ldc.i4.s   123
IL_000d:  call       instance void valuetype [mscorlib]System.Nullable`1<int32>::.ctor(!0)

//called
public Nullable(T value) {
    this.value = value; 
    this.hasValue = true;
} 

Nullable<T> has conversion operators (one implicit, one explicit) that help you write the following code.


int? y = 246; //implict conversion that create a Nullable on the fly, using the following implicit operator
public static implicit operator Nullable<T>(T value) { 
    return new Nullable<T>(value);
}

int z = (int)y; //explict conversion using the following explicit operator, this is not cast operation, this may throw exception, if Nullable.HasValue is false
public static explicit operator T(Nullable<T> value) { 
    return value.Value;
} 

public T Value {
    get { 
        if (!HasValue) { 
            ThrowHelper.ThrowInvalidOperationException(ExceptionResource.InvalidOperation_NoValue);
        } 
        return value;
    }
}


You may wonder why we cannot write the following code.


int z = y; //error, you can not do this, because there is not implicit conversion

This is because there is no implicit converter from Nullable<T> to T. If we had an operator like the following, we would be able to write the code above.


public static implicit operator T(Nullable<T> value) {
   if (value.HasValue)
   {
       return value.Value;
   }
   else
   {
       return default(T);
   }
}

But why do we have an explicit converter and not an implicit one? If we had this operator, there would be no difference between using Nullable<T> and T. The purpose of Nullable<T> is to use a value type T like a reference type. That is why the Value property throws an exception if there is no value: we want to use a value type like a reference type! See the following example.

int? x = null;
if (x == null) //the emitted MSIL will be like if (!x.HasValue)
{
    Console.WriteLine("x is null");
}

Although we cannot implicitly convert Nullable<T> to T, the C# compiler and the CLR allow us to use Nullable<T> like T in most cases. So the following code is legal.


int? y = 0;
y++;

//compiler will emit the following code
//if (y.HasValue)
//{
//    int temp = y.Value;
//    temp++;
//    y = temp;
//}

Int32? x = 5;
Console.WriteLine (x.GetType()); // it is "System.Int32"; not "System.Nullable<int32>"

Int32? n = 5;
Int32 result = ((IComparable) n).CompareTo(5); // Compiles & runs OK
Console.WriteLine(result); // 0

/*
If the CLR didn't provide this special support, it would be more cumbersome for you to write code to call an interface method on a nullable value type. You'd have to cast the unboxed value type first before casting to the interface to make the call: */

Int32 result = ((IComparable) (Int32) n).CompareTo(5); // Cumbersome

The null-coalescing ?? operator works with reference types.


string s = null;
string s2 = s ?? "something";
// this line is compiled to
  IL_0003:  ldloc.0
  IL_0004:  dup
  IL_0005:  brtrue.s   IL_000d
  IL_0007:  pop
  IL_0008:  ldstr      "something"
  IL_000d:  stloc.1

But the C# compiler makes "??" work for Nullable<T> as well. Underneath, though, the emitted code is completely different, like the following.


int z = y ?? 100;

//it is equivalent as 
//z = (y.HasValue) ? y.Value : 100

//it is also equivalent as 
//z = y.GetValueOrDefault(100);

To sum this up, the purpose of the Nullable type is to let a value type have a null value, but the compiler also lets us use it as a value type as well.