Dec 16, 2007

CodeSmith Render

In CodeSmith, every template has a Render method that writes the template's output to a TextWriter.
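
A minimal sketch of calling it, assuming MyTemplate is the generated class for a compiled template (the class name is illustrative):

            // hypothetical compiled template class; any CodeTemplate works the same way
            MyTemplate template = new MyTemplate();
            System.IO.StringWriter writer = new System.IO.StringWriter();
            template.Render(writer);              // Render writes the template output to the TextWriter
            string output = writer.ToString();    // captured output, e.g. for saving elsewhere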

Sub-Template

To register a sub-template, you include a Register directive in the master template. You can include as many Register directives as you like, so one master template can include multiple sub-templates. Sub-templates can be nested.

            <%@ Register Name="Header" Template="Header.cst" MergeProperties="True"
            ExcludeProperties="IncludeMeta" %>
            

The MergeProperties attribute specifies whether the properties of the sub-template should be dynamically added to the master template's properties. If you omit this attribute, it defaults to false. When you merge the properties, the properties of the sub-template are displayed on the property sheet of the main template when the main template is open in CodeSmith Explorer or CodeSmith Studio. This makes it easy to prompt for all the properties that are required for the entire code-generation process on a single property sheet.

You may want to share properties between a master template and sub-templates. Suppose your master template and sub-template both have a string property named "Server". When you prompt for this property in the master template, only the master template's copy of the property receives a value. To set the property in the sub-template, you use the CopyPropertiesTo method of the master template. This method matches properties from the master template to the sub-template on the basis of name and type. If it finds an exact match, it copies the value from the master template to the sub-template.

            // instantiate the sub-template
            Header header = this.Create<Header>();
            // copy matching properties from the master template to the sub-template
            this.CopyPropertiesTo(header);

If you want to specify a property individually, you can use the following code.

            Header header = this.Create<Header>();
            // this is a strongly typed template, so you can set its properties directly
            header.IncludeMeta = true;

However, you don't need to register a template if you don't need a strongly typed sub-template.

            CodeTemplate genericTemplate = this.GetCodeTemplateInstance("sub.cst");
            genericTemplate.SetProperty("TemplateName", "Sub template");
            

After you've registered a sub-template and set its properties, you can render the sub-template. You can render the sub-template directly to the output of the main template:

            Header header = this.Create<Header>();
            header.Render(this.Response);

Alternatively, you can render the sub-template to a separate file. This is useful when you want to create multiple output files as part of a single code-generation process.

            Header header = this.Create<Header>();
            header.RenderToFile("SomeFile.txt");


If the main template and the sub-template share the same base template, and the main template has all the properties that the sub-template has, you don't need to register the sub-template just to initialize the sub-template's properties; all you need to do is copy them by using "this.CopyPropertiesTo(header);". However, if the main template does not have all the properties that the sub-template has, you need to register the sub-template so that users can fill in the sub-template's properties from the main template.

Oct 31, 2007

xsd maxOccurs="2" minOccurs="1"

XSD allows you to specify maxOccurs and minOccurs on both a sequence and an element. What is the difference? Below is a sample schema section (element names are illustrative):

            <xs:element name="person">
                <xs:complexType>
                    <xs:sequence minOccurs="1" maxOccurs="2">
                        <xs:element name="id" type="xs:int" minOccurs="1" maxOccurs="3"/>
                        <xs:element name="name" type="xs:string"/>
                    </xs:sequence>
                </xs:complexType>
            </xs:element>
A sequence with maxOccurs greater than 1 means you can repeat the whole section more than once; an element with maxOccurs greater than 1 means you can repeat that element more than once. There is no difference if the sequence has only one element: you can put maxOccurs on the sequence or on the element. If there is more than one element in the sequence, there is a difference. For example, this instance is valid against the schema above:

            <person>
                <id>1</id>
                <id>2</id>
                <id>3</id>
                <name>dfsdf</name>
                <id>1</id>
                <id>2222</id>
                <name>fred</name>
            </person>
Generally, you should put maxOccurs on the element rather than on the sequence. Alternatively, you can convert the sequence into a complex type, declare an element of that type, and put maxOccurs on that element, as shown below.
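
For example (continuing the illustrative sample above), the repeated content can be factored into a named complex type so that maxOccurs sits on an element:

            <xs:complexType name="PersonType">
                <xs:sequence>
                    <xs:element name="id" type="xs:int" maxOccurs="3"/>
                    <xs:element name="name" type="xs:string"/>
                </xs:sequence>
            </xs:complexType>

            <xs:element name="people">
                <xs:complexType>
                    <xs:sequence>
                        <!-- the repeat count now lives on the element, not the sequence -->
                        <xs:element name="person" type="PersonType" minOccurs="1" maxOccurs="2"/>
                    </xs:sequence>
                </xs:complexType>
            </xs:element>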

Oct 28, 2007

Debugging Form Authentication API in asp.net

  FormsAuthentication.RedirectFromLoginPage(UsernameText.Text, false);
This method performs several tasks at once:
  1. It creates an authentication ticket for the user.
  2. It encrypts the information from the authentication ticket.
  3. It creates a cookie to persist the encrypted ticket information.
  4. It adds the cookie to the HTTP response, sending it to the client.
  5. It redirects the user to the originally requested page.

The second parameter of RedirectFromLoginPage() indicates whether a persistent cookie should be created. Persistent cookies are stored on the user’s hard drive and can be reused for later visits. If the value is false and the user closes the browser, the cookie is immediately removed. The cookie returned is something like:

"Set-Cookie: FormCookie=0AA27321DF30103594921784C0C2D721208FC509952739C2E5530F3BE9F8472E6A68283AF5C7DD539D56D2BD2CC39FCC3E488503B8299BCE04307B934B4B8B5C91224CBDBD81EAB3753F4FDEC0518199; path=/; HttpOnly".

If it is true, the cookie is persisted for the timeout specified in the <forms> tag. The cookie you get is something like:

"Set-Cookie: FormCookie=C63E21885ED5930E10A2D10CC262FA0929B68DD467B05566226346CC612DE7728244C8122D8445085616412C45A9B47F2B430F369EA68E5596A9642E4FAD4413AA3D170056130847E175292F01FEED1A; expires=Mon, 29-Oct-2007 04:43:35 GMT; path=/; HttpOnly".

If you want to dynamically set the cookie's expiration date, you need to do it manually:

if (FormsAuthentication.Authenticate(UsernameText.Text, PasswordText.Text))
{
            // Create the authentication cookie
            HttpCookie AuthCookie;
            AuthCookie = FormsAuthentication.GetAuthCookie(
            UsernameText.Text, true);
            AuthCookie.Expires = DateTime.Now.AddDays(10);
            // Add the cookie to the response
            Response.Cookies.Add(AuthCookie);
            // Redirect to the originally requested page
            Response.Redirect(FormsAuthentication.GetRedirectUrl(
            UsernameText.Text, true));
}
else
{
            // User name and password are not correct
            LegendStatus.Text = "Invalid username or password!";
}

When you call FormsAuthentication.SignOut(), you will get this response header:

"Set-Cookie: FormCookie=; expires=Tue, 12-Oct-1999 00:00:00 GMT; path=/; HttpOnly".

FormCookie is actually a ticket in a serialized string. You can also manually create the ticket, then manually create a cookie and assign the ticket string as the cookie value:

private void btnLogin_Click(object sender, System.EventArgs e)
{
            SqlDataReader sdr;
            // Create a connection
            SqlConnection sc = new SqlConnection(Application["DSN"].ToString());

            // Open the database connection
            sc.Open();

            // Create a command to get the user
            SqlCommand cmd = new SqlCommand("GetUser '" + txtEmail.Text + "', '" + txtPassword.Text + "'", sc);

            // Execute the command
            sdr = cmd.ExecuteReader();

            // Attempt to read the first record
            if (sdr.Read())
            {
            // close the datareader
            sdr.Close();
            // Get the list of roles the user is in
            SqlDataReader drRoles;
            SqlCommand cmdRoles = new SqlCommand("GetRoles '" + txtEmail.Text + "'", sc);
            ArrayList arRoles = new ArrayList();

            // Execute the command
            drRoles = cmdRoles.ExecuteReader();

            // Get a string builder to store the roles in a csv list
            System.Text.StringBuilder bldr = new System.Text.StringBuilder();

            // Loop through the list of roles and get them
            while (drRoles.Read())
            {
            bldr.Append(drRoles["Role"]);
            bldr.Append(",");
            }

            // Strip the last comma
            bldr.Remove(bldr.Length - 1, 1);

            // Create an authentication ticket
            // Place a serialized representation of the roles into the authentication ticket
            FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(1, txtEmail.Text, DateTime.Now, DateTime.Now.AddMinutes(20), false, bldr.ToString());

            // Get the encrypted version of the ticket
            string strEncrypted = FormsAuthentication.Encrypt(ticket);

            // Put it into a cookie
            HttpCookie hc = new HttpCookie(FormsAuthentication.FormsCookieName, strEncrypted);
            hc.Expires = DateTime.Now.AddMinutes(20);


            // Add it to the cookies collection
            Response.Cookies.Add(hc);

            // Redirect the user to the page they requested
            string strReturnURL = Request.Params["ReturnUrl"].ToString();
            if (strReturnURL != "") Response.Redirect(strReturnURL);
            }
            else
            {
            // Show a message that the credentials are invalid
            lblInvalidPassword.Visible = true;
            }
            }

Oct 27, 2007

Improving the Performance of a Reusable Authorization Framework

  • Batch authorization queries whenever possible to avoid frequent out-of-process round trips. For example, retrieve roles for multiple users in a single request.
  • Cache the authorization data close to where you will use it with an in-memory store, such as a Hashtable. The cache also reduces dependencies on the location and organization of the underlying store. You might also want a separate cache for each physical computer, for performance and increased security. (A sketch follows this list.)
  • Implement scheduled or on-demand refreshes of the cache information.
  • Implement lazy initialization of the authorization cache to avoid retrieving authorization information when no access checks will occur.
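
A minimal sketch of these ideas, assuming an in-memory Hashtable keyed by user name (the store, names, and sample data are illustrative):

            using System;
            using System.Collections;

            public class RoleCache
            {
                private static Hashtable rolesByUser;    // created on first use (lazy initialization)
                private static DateTime lastRefresh;
                private static readonly TimeSpan MaxAge = TimeSpan.FromMinutes(30);
                private static readonly object syncRoot = new object();

                public static string[] GetRoles(string userName)
                {
                    lock (syncRoot)
                    {
                        // lazy initialization plus a crude scheduled refresh
                        if (rolesByUser == null || DateTime.Now - lastRefresh > MaxAge)
                        {
                            rolesByUser = LoadAllRolesFromStore();   // one batched round trip
                            lastRefresh = DateTime.Now;
                        }
                        return (string[])rolesByUser[userName];      // null when the user is unknown
                    }
                }

                private static Hashtable LoadAllRolesFromStore()
                {
                    // hypothetical batch load; replace with a single query against the role store
                    Hashtable table = new Hashtable();
                    table["Chris"] = new string[] { "Admin" };
                    table["Doug"] = new string[] { "Admin", "Manager" };
                    return table;
                }
            }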

How to Change the Principal in an ASP.NET Application

protected void Application_AuthenticateRequest(Object sender, EventArgs e)
{
   // Check whether there is a current user and that
   // authentication has occurred.
   if (!(HttpContext.Current.User == null))
   {
      IIdentity CurrentUserIdentity = HttpContext.Current.User.Identity;
      // Check to see whether the Principal was cached.
      string CachedPrincipalKey = "CachedPrincipal" + CurrentUserIdentity.Name;
      if (HttpContext.Current.Cache[CachedPrincipalKey] == null)
      {
            // Load the principal by calling the GetPrincipal method.
            HttpContext.Current.Cache.Add(
            CachedPrincipalKey,
            GetPrincipal(CurrentUserIdentity),
            null,
            DateTime.MaxValue,
            new TimeSpan(0,30,0),
            CacheItemPriority.Normal,
            null);
      }
      HttpContext.Current.User = (IPrincipal)
      HttpContext.Current.Cache[CachedPrincipalKey];
   }
}


            IF EXISTS (select * from dbo.sysobjects where id = object_id(N'[UserRoles]') and
            OBJECTPROPERTY(id, N'IsUserTable') = 1)
            DROP TABLE [UserRoles]
            GO
            CREATE TABLE [UserRoles] (
            [UserName] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
            [Role] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
            ) ON [PRIMARY]
            GO
            INSERT INTO [Claims].[dbo].[UserRoles]([UserName], [Role])
            VALUES('Chris', 'Admin')
            INSERT INTO [Claims].[dbo].[UserRoles]([UserName], [Role])
            VALUES('Doug', 'Admin')
            INSERT INTO [Claims].[dbo].[UserRoles]([UserName], [Role])
            VALUES('Doug', 'Manager')
            GO

            private IPrincipal GetPrincipal(IIdentity user)
            {
                // Get the roles from the table based on a user name only.
                string SQL =
                    "SELECT Role FROM UserRoles WHERE UserName = '" + user.Name + "'";
                SqlConnection MyConnection = new SqlConnection(
                    "data source=localhost;initial catalog=Claims;Integrated Security=SSPI");
                SqlCommand MyCommand = new SqlCommand(SQL, MyConnection);
                MyConnection.Open();
                SqlDataReader MyDataReader = MyCommand.ExecuteReader();
                ArrayList alRoles = new ArrayList();
                // Load the roles into an ArrayList.
                while (MyDataReader.Read())
                    alRoles.Add(MyDataReader.GetString(0));
                MyDataReader.Close();
                MyCommand.Dispose();
                MyConnection.Close();
                MyConnection.Dispose();
                // Convert the roles to a string[], and load GenericPrincipal.
                string[] myRoles = (string[])alRoles.ToArray(typeof(string));
                return new GenericPrincipal(
                    new GenericIdentity(user.Name, user.AuthenticationType),
                    myRoles);
            }

Impersonator

Please also reference these articles: http://blogs.msdn.com/jimmytr/archive/2007/04/14/writing-test-code-with-impersonation.aspx and http://www.123aspx.com/redir.aspx?res=31987

using System;
using System.Runtime.InteropServices;
using System.Security.Principal;

public class Impersonator
{
   // private members for holding domain user account credentials
   private string username = String.Empty;
   private string password = String.Empty;
   private string domain = String.Empty;
   // this will hold the security context for reverting back to the client after impersonation operations are complete
   private WindowsImpersonationContext impersonationContext = null;

   // disable instantiation via default constructor
   private Impersonator()
   {}

   public Impersonator(string username, string domain, string password)
   {
     // set the properties used for domain user account
     this.username = username;
     this.domain = domain;
     this.password = password;
   }

   private WindowsIdentity Logon()
   {
     IntPtr handle = IntPtr.Zero;

     const int LOGON32_LOGON_NETWORK = 3;
     const int LOGON32_PROVIDER_DEFAULT = 0;

     // attempt to authenticate domain user account
     bool logonSucceeded = LogonUser(this.username, this.domain, this.password, LOGON32_LOGON_NETWORK, LOGON32_PROVIDER_DEFAULT, ref handle);

     if(!logonSucceeded)
     {
       // if the logon failed, get the error code and throw an exception
       int errorCode = Marshal.GetLastWin32Error();
       throw new Exception("User logon failed. Error Number: " + errorCode);
     }

     // if logon succeeds, create a WindowsIdentity instance
     WindowsIdentity winIdentity = new WindowsIdentity(handle);

     // close the open handle to the authenticated account
     CloseHandle(handle);

     return winIdentity;
}


public void Impersonate()
{
   // authenticates the domain user account and begins impersonating it
   this.impersonationContext = this.Logon().Impersonate();
}


   public void Undo()
   {
     // revert back to the original security context, which was stored in the WindowsImpersonationContext instance
     this.impersonationContext.Undo();
   }

   [DllImport("advapi32.dll", SetLastError=true)]
   private static extern bool LogonUser(string lpszUsername,
   string lpszDomain,
   string lpszPassword,
   int dwLogonType,
   int dwLogonProvider,
   ref IntPtr phToken);

   [DllImport("kernel32.dll", CharSet=CharSet.Auto)]
   private static extern bool CloseHandle(IntPtr handle);

}
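
A hedged usage sketch (account values are placeholders):

   // impersonate a domain account for the duration of an operation
   Impersonator impersonator = new Impersonator("someUser", "SOMEDOMAIN", "somePassword");
   impersonator.Impersonate();
   try
   {
       // code here runs under the impersonated account,
       // e.g. accessing a file share the process account cannot reach
   }
   finally
   {
       impersonator.Undo();   // always revert to the original security context
   }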

How To: Use Forms Authentication with Active Directory in ASP.NET 1.1

http://msdn2.microsoft.com/en-us/library/aa302397.aspx

More on Kerberos

By default, the Microsoft® Windows® 2000 operating system uses the Kerberos protocol for authentication. This How To describes how to configure Kerberos delegation, a powerful feature that allows a server, while impersonating a client, to access remote resources on behalf of the client.

Delegation is a very powerful feature and is unconstrained on Windows 2000. It should be used with caution. Computers that are configured to support delegation should be under controlled access to prevent misuse of this feature. Windows .NET Server will support a constrained delegation feature.

When a server impersonates a client, Kerberos authentication generates a delegate-level token (capable of being used to respond to network authentication challenges from remote computers) if the following conditions are met:

  1. The client account that is being impersonated is not marked "Account is sensitive and cannot be delegated" in Microsoft Active Directory® directory service. To verify this:
    1. Log onto the domain controller using an administrator account.
    2. On the taskbar, click the Start button, point to Programs, point to Administrative Tools, and then click Active Directory Users and Computers.
    3. Under your domain, click the Users folder.
    4. Right-click the user account that is to be delegated, and then click Properties.
    5. Click the Account tab.
    6. Within the Account options list, make sure Account is sensitive and cannot be delegated is not selected.
    7. Click OK to close the Properties dialog box.
  2. The server process account (the user account under which the server process is running, or the computer account if the process is running under the local SYSTEM account) is marked as trusted for delegation in Active Directory.
    This procedure ensures that the account used to run the server process (the process that performs impersonation) is allowed to delegate client accounts. You must configure the user account under which the server process runs, or if the process runs under the local SYSTEM account, you must configure the computer account. Perform the appropriate procedure that follows, depending on whether your server process runs under a Windows account or a local SYSTEM account.
    1. To confirm that the server process account is trusted for delegation if the server process runs under a Windows user account
      1. Within the Users folder of Active Directory Users and Computers, right-click the user account that is used to run the server process that will impersonate the client, and then click Properties.
      2. Click the Account tab.
      3. Within the Account options list, click Account is trusted for delegation.
    2. To confirm that the server process account is trusted for delegation if the server process runs under the local SYSTEM account
      1. Right-click the Computers folder within Active Directory Users and Computers, and then click Properties.
      2. Right-click the server computer (where the process that impersonates the client will be running), and then click Properties.
      3. On the General page, click Trust computer for delegation.

Oct 21, 2007

NTLM vs Kerberos

Both NTLM and Kerberos are forms of Integrated Windows Authentication. NTLM uses a challenge-response protocol to authenticate the client to the server. It begins when the client attempts to connect to a secure application. The server sends a challenge to the client, and the client responds with a hashed value that the server can use to validate the user and password. All of this is seamless to the end user, who simply sees the requested web page open in the browser. NTLM is simple, works well, and developers have often been able to ignore authentication concerns because it was essentially transparent. As security concerns have grown, however, the need for a more secure authentication provider has become increasingly obvious. This is where Kerberos comes into the picture.

Kerberos is a ticket-based authentication protocol. When a client wants to access a secure application, it requests a ticket from the key distribution center (KDC), which is the server running Active Directory. The KDC then creates a ticket based on the user credentials stored in Active Directory. The ticket is then sent back to the client, which can only use the ticket if it has the correct password. Once the user is authenticated, the ticket is cached locally where it remains until it expires.

Kerberos has several advantages over NTLM that SharePoint developers should care about. First, Kerberos has much better performance than NTLM. Because Kerberos caches credentials, servers can respond more quickly than under NTLM. Kerberos is also more secure than NTLM because the client can essentially authenticate the server as well as have the server authenticate the client. The biggest reason for developers to care about Kerberos, however, is delegation.

Take a step back and consider the process of connecting to a WSS team site using NTLM authentication. We know that NTLM will successfully authenticate a user that has a Windows account and grant access to the team site, which will then appear in the browser. While most of the page content will appear correctly, what if a web part on that page displays information from a line-of-business system with its own separate database? The web part itself must also authenticate against this other database. What credentials does it use? In many cases, we want the web part to use the same credentials as the current user. In other words, we want the web part to impersonate the current user.

SharePoint sites are set up so that web parts will initially impersonate the user accessing them. The user credentials may subsequently be passed to any system residing on the same server as SharePoint or on a different server that requires only a single additional authentication. If the data source requires a second authentication—like when you access a web service, which subsequently accesses a database—the impersonation will fail. This is typically referred to as the “double-hop” issue.

Kerberos, on the other hand, supports impersonation across machines. This type of extended impersonation is known as delegation. If the end user were authenticating to the team site using Kerberos, the web part would successfully authenticate to the line-of-business database and information would appear in the page.

It’s important to point out that Kerberos requires some additional configuration that can only be accomplished if you have rights to Active Directory. Additionally, there are other ways to solve the double-hop issue. Many developers, for example, utilize a SQL Server standard security account buried in a connection string that a web part uses to access the database. The MOSS SSO service can also be used to gain access to any system in the enterprise. The bottom line, however, is that Kerberos is superior to NTLM in several ways and you should utilize it where you can. In the exercise in this chapter, “Creating a Development Environment,” I detail the steps necessary to set up and configure Kerberos.

Oct 10, 2007

Referencing a schema in an XML instance document

            <?xml version="1.0"?>
            <BookStore xmlns="http://www.books.org"
                       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                       xsi:schemaLocation="http://www.books.org BookStore.xsd">
                <Book>
                    <Title>My Life and Times</Title>
                    <Author>Paul McCartney</Author>
                    <Date>July, 1998</Date>
                    <ISBN>94303-12021-43892</ISBN>
                    <Publisher>McMillin Publishing</Publisher>
                </Book>
                ...
            </BookStore>

Global element vs Local element in xsd

Global element declarations, global type definitions: these are element declarations/type definitions that are immediate children of the schema element. Local element declarations, local type definitions: these are element declarations/type definitions that are nested within other elements/types.
So what if an element or type is global or local? What practical impact does it have? Answer: only global elements/types can be referenced (i.e., reused). Thus, if an element/type is local, it is effectively invisible to the rest of the schema (and to other schemas).
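
For example (an illustrative schema), "name" below is global and can be reused via ref, while "street" is local and invisible elsewhere:

            <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
                <!-- global: an immediate child of xs:schema, so other declarations can reference it -->
                <xs:element name="name" type="xs:string"/>
                <xs:element name="address">
                    <xs:complexType>
                        <xs:sequence>
                            <xs:element ref="name"/>
                            <!-- local: nested inside "address", invisible to the rest of the schema -->
                            <xs:element name="street" type="xs:string"/>
                        </xs:sequence>
                    </xs:complexType>
                </xs:element>
            </xs:schema>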

xsd type inheritance

For example (illustrative types), a complex type can derive from a base type by extension:

            <xs:complexType name="Address">
                <xs:sequence>
                    <xs:element name="name" type="xs:string"/>
                    <xs:element name="street" type="xs:string"/>
                    <xs:element name="city" type="xs:string"/>
                </xs:sequence>
            </xs:complexType>

            <xs:complexType name="USAddress">
                <xs:complexContent>
                    <xs:extension base="Address">
                        <xs:sequence>
                            <xs:element name="state" type="xs:string"/>
                            <xs:element name="zip" type="xs:decimal"/>
                        </xs:sequence>
                    </xs:extension>
                </xs:complexContent>
            </xs:complexType>

Oct 9, 2007

Enable InfoPath intellisense in VS.NET IDE

C:\>copy "C:\Program Files\Microsoft office\Office12\Microsoft.Office.InfoPath.x ml" "C:\WINDOWS\assembly\GAC_MSIL\Microsoft.Office.InfoPath\12.0.0.0__71e9bce111 e9429c"
C:\>copy "C:\Program Files\Microsoft office\Office12\Microsoft.Office.Interop.In foPath.SemiTrust.xml" "C:\WINDOWS\assembly\GAC\Microsoft.Office.Interop.InfoPath .SemiTrust\11.0.0.0__71e9bce111e9429c"

Sep 27, 2007

Workflow in WSS3 notes

Because not all the activities available in the Visual Studio 2005 designer for Windows Workflow Foundation are relevant in the context of SharePoint sites, Windows SharePoint Services supports a subset of the available activities. These include the following:

  • Code
  • ConditionedActivityGroup
  • Scope
  • Sequence
  • Replicator

WSS provides specific activities that help in three main areas: creating, updating, completing, and deleting SharePoint tasks; creating, updating, and deleting SharePoint task alerts; and enabling workflow forms within specific scopes to allow users to modify workflows that are in progress.

Your search cannot be completed because this site is not assigned to an indexer (WSS3)

  • Go to SharePoint Central Administration > Application Management > Content Databases (Underneath SharePoint Web Application Management).
  • Select the correct SharePoint web application and click the name of the content database; this opens the “Manage Content Database Settings” page.
  • Make sure that the Search Server is set on the “Manage Content Database Settings” page.

Sep 23, 2007

Core workflow services

  • Scheduling : Creates and manages the threads used by the runtime engine to execute workflow instances.
  • Commit Work Batch: Manages the transactions used by the runtime engine to maintain consistency between the internal workflow state and external data stores.
  • Persistence: Handles persistence of a workflow instance at the direction of the runtime engine.
  • Tracking: Provides the ability to instrument workflow instances by recording tracking events. Unlike the other core services, you are permitted to register multiple tracking services with the runtime engine.

Sep 20, 2007

Best practice of using Trace and Debug

using System;
using System.Diagnostics;

class Program
{
    private static TraceSource mySource = new TraceSource("TraceSourceApp");

    public static TraceSource MasterTraceSource
    {
        get { return mySource; }
    }

    static void Main(string[] args)
    {
        Trace.WriteLine("program started...");
        Trace.Assert(1 != 1, "something wrong");

        MasterTraceSource.TraceInformation("Trace information");
        MasterTraceSource.TraceEvent(TraceEventType.Error, 1, "Error message.");
        MasterTraceSource.TraceEvent(TraceEventType.Verbose, 2, "Warning message.");
        MasterTraceSource.Close();
    }
}

The matching configuration file is:

            <?xml version="1.0" encoding="utf-8" ?>
            <configuration>
            <system.diagnostics>
            <sources>
            <!--
            "TraceSourceApp" is the name of TraceSource, new TraceSource("TraceSourceApp")
            switch name reference the name in source switch, it controls whether send
            the information to listners
            -->
            <source name="TraceSourceApp" switchName="sourceSwitch" switchType="System.Diagnostics.SourceSwitch">
            <listeners>
            <add name="consoleListner" />
            <add name="textfileListner"/>
            <remove name="Default"/>
            </listeners>
            </source>
            </sources>

            <trace autoflush="true" indentsize="4">
            <listeners>
            <add name="consoleListner" />
            <remove name="Default" />
            </listeners>
            </trace>

            <switches>
            <add name="sourceSwitch" value="Verbose"/>
            </switches>

            <sharedlisteners>
            <add name="consoleListner" type="System.Diagnostics.ConsoleTraceListener">
            <!--filter is optional, it controls whether to output info to underlining datasource-->
            <filter type="System.Diagnostics.EventTypeFilter" initializeData="Error"/>
            </add>
            <add name="textfileListner" type="System.Diagnostics.TextWriterTraceListener" initializeData="textfileListner.log">
            <filter type="System.Diagnostics.EventTypeFilter" initializeData="Verbose"/>
            </add>
            </sharedListeners>



            </system.diagnostics>
            </configuration>

When you enable ASP.NET page tracing with writeToDiagnosticsTrace set to true, the trace output not only goes to the page or trace.axd; it is also forwarded to the System.Diagnostics trace listeners. And it is much simpler to set up.
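
A sketch of that setting (attribute values taken from the note above):

            <system.web>
                <trace enabled="true" requestLimit="20" writeToDiagnosticsTrace="true" pageOutput="false" />
            </system.web>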

Sep 19, 2007

Runtime Services

The workflow runtime engine supports the concept of external services. Services are class instances that you create and register with the runtime during application startup. Each service fulfills a defined purpose. Services come in two varieties: core and local. The functionality provided by core services is defined by Microsoft. In some cases, the workflow runtime will register its own default implementation of a core service if you don’t provide your own. In other cases, the service is optional and a default is not automatically provided for you.

For example, persistence of workflows is important, especially when they are long-running. WF provides the SqlWorkflowPersistenceService class (found in the System.Workflow.Runtime.Hosting namespace) for this purpose. This is a service that handles the persistence duties using a SQL database. When a workflow instance is idled or suspended, its current state can be saved to a database. When the workflow instance is needed again, it is reconstituted within the workflow runtime by retrieving it from the database. By registering this service with the workflow runtime, all of your workflows make use of this functionality. If you prefer another persistence mechanism, you can derive your own persistence service from the abstract WorkflowPersistenceService class and register it with the runtime engine. The persistence service is considered a core workflow service.

Other core services include Commit Work Batch (DefaultWorkflowCommitWorkBatchService), runtime thread management (DefaultWorkflowSchedulerService), and workflow tracking (SqlTrackingService). All of these can be used in their default form or extended and enhanced by you.

On the other hand, local services are developed by you to serve any purpose. One common use of local services is to act as a communications conduit for workflows. The purpose, design, and implementation of this type of service class are completely up to you. You register a local service in a similar way to core services. Once your local service is registered with the runtime, a workflow can invoke methods on the service, or handle events that originate from the service. Local services are an important mechanism used by workflows to communicate with other parts of your application.

Runtime Engine

WF includes a core runtime engine that is represented by the WorkflowRuntime class (found in the System.Workflow.Runtime namespace). The workflow runtime is not a self-contained application. Instead, an instance of this class must be hosted by your application in order to execute and manage workflows. You host the workflow runtime and the runtime hosts the individual workflow instances. The workflow runtime is the component that provides an execution environment for the workflow instances.

The WorkflowRuntime class includes methods that permit you to configure and control the workflow runtime. By subscribing to events that are exposed by this class, you can also receive status change notifications. For example, you can receive an event notification when an individual workflow instance starts, terminates, or completes successfully.

The runtime engine provides an execution environment for your workflows. You don’t directly execute workflows within your application. Instead, you ask the runtime engine to create an instance of a workflow which you then instruct to start.

By default, workflows execute asynchronously in a thread that is managed by the runtime engine. This allows you to start multiple workflows from your host application at the same time, with all of them under the control of the runtime engine.
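
A minimal hosting sketch along these lines, assuming a workflow class named MyWorkflow and an existing persistence database (both illustrative; AddService, StartRuntime, CreateWorkflow, and the completion events are standard WF 3.x APIs):

            using System;
            using System.Threading;
            using System.Workflow.Runtime;
            using System.Workflow.Runtime.Hosting;

            class Host
            {
                static void Main()
                {
                    using (WorkflowRuntime runtime = new WorkflowRuntime())
                    {
                        // core services must be registered before the runtime starts
                        runtime.AddService(new SqlWorkflowPersistenceService(
                            "Initial Catalog=WorkflowStore;Data Source=localhost;Integrated Security=SSPI"));
                        runtime.StartRuntime();

                        AutoResetEvent done = new AutoResetEvent(false);
                        runtime.WorkflowCompleted += delegate { done.Set(); };
                        runtime.WorkflowTerminated += delegate { done.Set(); };

                        // workflows run asynchronously; wait until this instance finishes
                        WorkflowInstance instance = runtime.CreateWorkflow(typeof(MyWorkflow));
                        instance.Start();
                        done.WaitOne();
                    }
                }
            }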

Each workflow can go through multiple execution states throughout its lifetime. For example, all workflows start in the created state and then move into the running state when execution begins. The workflow can also pass into states such as suspended, terminated, or completed. Other events associated with a workflow such as idled, persisted, loaded, or unloaded are possible. It is the runtime engine that manages the life and death of each workflow as it passes through these states.

The runtime engine is also responsible for scheduling and managing execution threads, workflow persistence, workflow transactions (committing of batched work), and workflow tracking. However, while the responsibility for these tasks rests with the runtime engine, it doesn’t actually handle these duties by itself. Each of these tasks has been implemented as a runtime service that you create and register with the runtime engine during application startup. This modular design permits you to swap out a default implementation in favor of one that you’ve developed.

These core services can only be added to the runtime engine before it is started. Once the StartRuntime method is called on the WorkflowRuntime object, you are no longer allowed to add core services. This restriction only applies to these core services and not to local services, which are covered in the next section.

With the exception of the Tracking service, only one instance of each type of service can be added to the runtime engine. And since these services must be added prior to starting the runtime engine, you are unable to replace a service later. However, you are allowed to create and register multiple instances of the tracking service.

With this design, Microsoft is encouraging you to implement your own services in order to customize and extend the core runtime engine. As a starting point, WF includes multiple implementations of some of the services, each one with a different behavior.

Workflow Types

Sequential Workflow

Sequential workflows declare a series of steps that are executed in a prescribed order. Within the workflow, the flow of control is defined with familiar constructs such as if-else branching and while loops.

State Machine Workflows

State machine workflows don’t define a fixed sequence of steps. Instead, they define a set of states, with possible transitions between each state. Each state may contain one or more steps that are executed during state transitions.

How to choose

Both workflow types are suitable for use in a wide variety of applications. You can even use both types within the same application. However, each workflow type targets a different kind of problem. The deciding factor when choosing the workflow type usually comes down to control.

A sequential workflow defines the flow of control within the workflow. Since it specifies the exact sequence of steps within the workflow, it is in control. It works best for system interaction problems where the prescribed steps are known at design time.

State machine workflows don’t define a fixed flow of control within the workflow. The exact sequence of state transitions is controlled by external events. For this reason, state machine workflows are well-suited to problems that involve human interaction. Humans don’t always do things in a prescribed sequence. Modeling a problem involving human interaction requires flexibility, and a state machine workflow provides this.

In most situations, you can make either type of workflow work. However, choosing the wrong workflow type for the problem may result in a more complicated, inelegant solution. You need to choose the workflow type that feels like a natural fit to the problem. For the account withdrawal example, which workflow type is best? You could implement a working application using either type. This doesn’t mean that both workflow types are equally suited to solving the problem.

Workflow basic

Workflow is simply an ordered series of steps that accomplishes some defined purpose according to a set of rules. By that definition, what I just described is a workflow. To a developer, the word workflow typically conjures up images of a highly visual environment where complex business rules and flow of control are declared graphically. It’s an environment that allows you to easily visualize and model the activities (steps) that have been declared to solve a problem. And since you can visualize the activities, it’s easier to change, enhance, and customize them. But there is still more to workflows than just the development environment. Workflows represent a different programming model. It’s a model that promotes a clear separation between what to do and when to do it. This separation allows you to change the when without affecting the what. Workflows generally use a declarative programming model rather than a procedural one. With this model, business logic can be encapsulated in discrete components. But the rules that govern the flow of control between components are declarative.
General purpose languages such as C# or Visual Basic can obviously be used to solve business problems. But the workflow programming model really enables you to implement your own domain-specific language. With such a language, you can express business rules using terms that are common to a specific problem domain. Experts in that domain are able to view a workflow and easily understand it, since it is declared in terminology that they understand.

Workflows allow you to easily model system and human interactions. A system interaction is how we as developers would typically approach a problem. You define the steps to execute and write code that controls the sequence of those steps. The code is always in total control. Human interactions are those that involve real live people. The problem is that people are not always as predictable as your code. For example, you might need to model a mortgage loan application. The process might include steps that must be executed by real people in order to complete the process. How much control do you have over the order of those steps? Does the credit approval always occur first, or is it possible for the appraisal to be done first? What about the property survey? Is it done before or after the appraisal? And what activities must be completed before you can schedule the loan closing? The point is that these types of problems are difficult to express using a purely procedural model because human beings are in control. The exact sequence of steps is not always predictable. The workflow model really shines when it comes to solving human interaction problems.

Why Windows Workflow Foundation

If you are developing line-of-business applications, you can use WF to orchestrate the business rules. If your application is comprised of a series of human interactions, you can use a WF state machine workflow to implement logic that can react to those interactions. If you need a highly customizable application, you can use the declarative nature of WF workflows to separate the business logic from the execution flow. This allows customization of the flow of control without affecting the underlying business logic. And if you are looking for a better way to encapsulate and independently test your application logic, implement the logic as discrete custom activities that are executed within the WF runtime environment.

  • It provides a flexible and powerful framework for developing workflows. You can spend your time and energy developing your own framework, visual workflow designer, and runtime environment. Or you can use a foundation that Microsoft provides and spend your valuable time solving real business problems.
  • It promotes a consistent way to develop your applications. One workflow looks very similar to the next. This consistency in the programming model and tools improves your productivity when developing new applications and maintaining existing ones.
  • It supports sequential and state machine workflows. Sequential workflows are generally used for system interactions. State machine workflows are well-suited to solving problems that focus on human interaction.
  • It supports workflow persistence. The ability to save and later reload the state of a running workflow is especially important when modeling human interactions.
  • It supports problem solving using a domain-specific model. Microsoft encourages you to develop your own custom activity components. Each custom component addresses a problem that is specific to your problem domain and uses terminology that is common to the domain.
  • It provides a complete workflow ecosystem. In addition to the workflow runtime itself, Microsoft also provides a suite of standard activities, workflow persistence, workflow monitoring and tracking, and a workflow designer that is integrated with Visual Studio which you can also host in your own applications.
  • It is free of charge. Because of this and its tight integration with Visual Studio, it will become the de facto standard workflow framework for Windows developers. A growing community of other WF developers is already in place. They are already sharing their ideas, their custom activity components, and other code.

Sep 13, 2007

System.Transactions

Your code can run inside a transaction scope or with no scope at all. If no scope wraps your code directly, your code follows its caller's transaction scope. If there is no scope above either, there is no transaction control in the .NET domain.
If a scope directly wraps your code, your code behaves according to its TransactionScopeOption:
If it is TransactionScopeOption.Required, the code joins the ambient scope if one exists, or creates a new scope if there is none.
If it is TransactionScopeOption.RequiresNew, a new scope is always created.
If it is TransactionScopeOption.Suppress, your code will not be part of the ambient transaction scope above. This means that if your code executes successfully and an error occurs afterward, the error can roll back the ambient transaction but will not roll back your code's work. This is different from the "no scope" case, where your code can be rolled back if there is an ambient transaction.
When to use what? First, you cannot assume anything about your caller; you never know who is going to call your code.
If your code requires a transaction internally (for example, it performs multiple inserts/updates/deletes) and it can be part of an outer ambient transaction scope, use TransactionScopeOption.Required. If your code requires a transaction internally but that transaction must be independent of any existing ambient (outer) transaction, because your code must not be affected by the ambient transaction, use TransactionScopeOption.RequiresNew. These two options are similar.
The difficult decision is when to use Suppress and when to use no scope at all.
If your code must not be affected by the ambient transaction and performs only a single insert/update/delete, use TransactionScopeOption.Suppress. If it performs only a single insert/update/delete and should be part of the ambient transaction, do not use a scope.
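
A minimal sketch of the Required case (connection string, table, and values are illustrative):

            using System.Data.SqlClient;
            using System.Transactions;

            public static void Transfer(string connectionString)
            {
                // joins the caller's ambient transaction if one exists; otherwise creates one
                using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required))
                using (SqlConnection con = new SqlConnection(connectionString))
                {
                    con.Open();
                    new SqlCommand("UPDATE Accounts SET Balance = Balance - 10 WHERE Id = 1", con).ExecuteNonQuery();
                    new SqlCommand("UPDATE Accounts SET Balance = Balance + 10 WHERE Id = 2", con).ExecuteNonQuery();
                    scope.Complete();   // without this call, the transaction rolls back on Dispose
                }
            }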


Sep 3, 2007

Controlling Changes to Content Types

You can prevent users from making changes to content types in two ways: specify a content type as read-only or define it as sealed.
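
A hedged object-model sketch of both options (site URL and content type name are placeholders):

            using (SPSite site = new SPSite("http://server/sites/test"))
            using (SPWeb web = site.OpenWeb())
            {
                SPContentType contentType = web.ContentTypes["MyContentType"];
                contentType.ReadOnly = true;    // or: contentType.Sealed = true;
                contentType.Update();
            }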

WSS List limits

Performance with large lists and document libraries has also been another concern with Windows SharePoint Services 2.0. For example, lists often start showing degraded performance when the number of items exceeds 2000. Document libraries have similar performance concerns. However, the rule of thumb for not exceeding 2000 documents applies to each folder within a document library as opposed to the document library itself. Therefore, coming up with a scheme to partition documents across multiple folders within a document library has come to be a popular approach in Windows SharePoint Services 2.0 for dealing with a large number of documents.

WSS and ASP.NET 2.0

Windows SharePoint Services 3.0 provisioning starts at the level of the IIS Web site. Before you can create your first Windows SharePoint Services site, someone must run an administrative procedure to extend 3.0 functionality onto one or more IIS Web sites. In Windows SharePoint Services 2.0, the term virtual server was used to describe an IIS Web site that had been extended with Windows SharePoint Services functionality. To avoid confusion with another Microsoft product of the same name, the Windows SharePoint Services 3.0 documentation now refers to an IIS Web site extended with Windows SharePoint Services functionality as a Web application.

Windows SharePoint Services 2.0 was integrated with IIS 6.0 and ASP.NET 1.1 using an ISAPI filter DLL. This integration technique results in IIS routing requests to Windows SharePoint Services before ASP.NET. This routing has proven to be problematic in certain situations because Windows SharePoint Services takes control of an incoming HTTP request before it has a chance to be properly initialized with ASP.NET context.

The way in which Windows SharePoint Services 3.0 integrates with ASP.NET has been completely redesigned. First of all, Windows SharePoint Services 3.0 is built upon ASP.NET 2.0, which provides significant enhancements over ASP.NET 1.1. Furthermore, the integration between Windows SharePoint Services 3.0 and ASP.NET 2.0 was changed to route incoming requests through the ASP.NET runtime before Windows SharePoint Services. The Windows SharePoint Services team achieved these improvements to the routing infrastructure by removing the ISAPI filter and adding an HttpModule and an HttpHandler that are registered with ASP.NET using standard Web.config entries. This means incoming HTTP requests always enter the ASP.NET run-time environment and are fully initialized with ASP.NET context before they are forwarded to the code written by the Windows SharePoint Services team to carry out Windows SharePoint Services–specific processing.


Also note that when you extend an IIS Web site to become a Windows SharePoint Services Web application, Windows SharePoint Services 3.0 adds a wildcard application map to the IIS metabase. This wildcard application map serves to route all incoming HTTP requests to the ASP.NET runtime regardless of their extension. This wildcard application map is necessary to forward a request for any type of file (for example, .pdf, .doc, .docx) to ASP.NET, which then forwards the request to Windows SharePoint Services for processing.

Another relevant issue of the new architecture has to do with how .aspx pages are parsed and compiled. The .aspx page parser used by ASP.NET 1.1 works only with .aspx pages that reside on the local file system. However, Windows SharePoint Services architecture relies on storing .aspx pages inside a SQL Server database. Since Windows SharePoint Services 2.0 relies on ASP.NET 1.1, the Windows SharePoint Services team had to create their own .aspx page parser. Unfortunately, the .aspx parser of Windows SharePoint Services 2.0 does not support many of the richer features offered by the ASP.NET .aspx page parser.

ASP.NET 2.0 introduced a new pluggable component type known as a virtual path provider. A developer can write a custom component that retrieves .aspx pages for any location including a database such as SQL Server. Once a custom virtual path provider retrieves an .aspx page, it can then hand it off to ASP.NET to conduct the required parsing and compilation. ASP.NET also gives the virtual path provider a good deal of control as to how .aspx pages are parsed and whether they are compiled or run in a non-compile mode.

The Windows SharePoint Services 3.0 team has created their own virtual path provider named SPVirtualPathProvider, which is shown in Figure 1-1 on the next page. As you can see, the SPVirtualPathProvider is able to retrieve .aspx pages from SQL Server and then hand them off to the .aspx page parser supplied by ASP.NET 2.0. That means the Windows SharePoint Services 3.0 team was not required to evolve their .aspx page parser from the previous version. It also means that Windows SharePoint Services 3.0 does not suffer from a reduced feature set with respect to page parsing as it does with Windows SharePoint Services 2.0.

If you’re familiar with the architecture of Windows SharePoint Services 2.0, you’ve probably heard the terms “ghosting” and “unghosting” used in conjunction with the .aspx pages of a Windows SharePoint Services 2.0 site. Page ghosting is a Windows SharePoint Services feature that allows a front-end Web server to store an .aspx page template on its local file system and to share that page template across many different sites. Page ghosting offers performance benefits because Windows SharePoint Services can serve up pages for thousands of sites using a page template stored on the local file system and loaded into memory a single time. Windows SharePoint Services 2.0 supports user modifications to the page template using tools such as Microsoft Office FrontPage 2003. Once a user modifies a page template and saves the changes, a customized version of the page is stored on SQL Server for that particular site. In Windows SharePoint Services 2.0, this is often referred to as unghosting a page.

Windows SharePoint Services 3.0 still supports page templates that live on the Web server as well as customized versions of those page templates that are stored on SQL Server. However, the Windows SharePoint Services team and their documentation have stopped using the terms ghosting and unghosting because they do not translate well into other spoken languages. In Windows SharePoint Services 3.0, the term “uncustomized page” refers to a page template used directly from the local file system of the Web server, and the term “customized page” refers to a modified version of the page template that has been written to the content database for a particular site.

Another change to be aware of is that Microsoft Office FrontPage 2003 has been renamed in its new release as Microsoft Office SharePoint Designer 2007. Like the previous versions of FrontPage, Office SharePoint Designer 2007 is targeted more toward users than developers. However, it’s nonetheless a handy tool to have in your bag of tricks as a Windows SharePoint Services developer.

Office SharePoint Designer 2007 provides a code editor and WYSIWYG designer for customizing .aspx pages within Windows SharePoint Services 3.0 sites. You can also create new pages within a Windows SharePoint Services site that have no corresponding page template on the Web server. Office SharePoint Designer 2007 supports creating lists and document libraries and even supplies a new wizard for creating custom workflows on a Windows SharePoint Services site. Workflows in Windows SharePoint Services 3.0 will be discussed in more depth later in this chapter.

Sep 1, 2007

Object DataSource Update

Assume you create a grid that shows a list of EmployeeDetails objects. You also add a column with edit links. When the user commits an edit, the GridView fills the ObjectDataSource.UpdateParameters collection with one parameter for each property of the EmployeeDetails class, including EmployeeID, FirstName, LastName, and TitleOfCourtesy. Then, the ObjectDataSource searches for a method named UpdateEmployee() in the EmployeeDB class. This method must have the same parameters, with the same names. That means this method is a match:
            public void UpdateEmployee(int employeeID, string firstName, string lastName,
            string titleOfCourtesy)
            { ... }

            
This method is not a match, because the names don’t match exactly:
            public void UpdateEmployee(int id, string first, string last,
            string titleOfCourtesy)
            { ... }
            
This is not a match, because there’s an additional parameter:
            public void UpdateEmployee(int employeeID, string firstName, string lastName,
            string titleOfCourtesy, bool useOptimisticConcurrency)
            { ... }
            
The method matching algorithm is not case-sensitive, and it doesn’t consider the order or data type of the parameters. It simply tries to find a method with the right number of parameters and the same names. As long as that method is present, the update can be committed automatically, without any custom code.
Sometimes you may run into a problem in which the property names of your data class don’t exactly match the parameter names of your update method. If all you need is a simple renaming job, you need to perform the task that was described in the “Updating with Stored Procedures” section earlier, although the syntax is slightly different. First, you define the additional parameters you need, with the correct names. For example, maybe you need to rename the EmployeeDetails.EmployeeID property to a parameter named id in the update method. Here’s the new parameter you need:
            <asp:ObjectDataSource ID="sourceEmployees" runat="server" ... >
                <UpdateParameters>
                    <asp:Parameter Name="id" Type="Int32" />
                </UpdateParameters>
            </asp:ObjectDataSource>

            
Second, you react to the ObjectDataSource.Updating event, setting the value for these parameters and removing the ones you don’t want:
            protected void sourceEmployees_Updating(object sender,
            ObjectDataSourceMethodEventArgs e)
            {
            e.InputParameters["id"] = e.InputParameters["EmployeeID"];
            e.InputParameters.Remove("EmployeeID");
            }
            
Handling Identity Values in an Insert

So far, all the examples you’ve seen have used parameters to supply values to an update operation. However, you can also create a parameter to return a result. With the SqlDataSource, you can use this option to get access to an output parameter. With the ObjectDataSource, you can use this technique to capture the return value. To see this in action, it’s worth considering the InsertEmployee() method, which adds an employee record and returns the newly generated unique ID value as an integer:

            public int InsertEmployee(EmployeeDetails emp)
            { ... }

You don’t need to use the identity value. As you’ve seen already, linked data controls are bound after any updates are committed, which ensures that the updated information always appears in the linked controls. However, you might want to use the identity for another purpose, such as displaying a confirmation message. To capture this identity value, you need to define a parameter along these lines (the parameter name is illustrative):
            <asp:ObjectDataSource ID="sourceEmployees" runat="server" ... >
                <InsertParameters>
                    <asp:Parameter Name="ReturnValue" Direction="ReturnValue" Type="Int32" />
                </InsertParameters>
            </asp:ObjectDataSource>
Now you can retrieve the parameter by responding to the Inserted event, which fires after the insert operation is finished:

            protected void sourceEmployees_Inserted(object sender,
            ObjectDataSourceStatusEventArgs e)
            {
                if (e.Exception == null)
                {
                    lblConfirmation.Text = "Inserted record " + e.ReturnValue.ToString();
                }
            }

Parameterized Constructor of ObjectDataSource

Sometimes the class used by the ObjectDataSource needs a parameterized constructor; you can supply the instance in the ObjectCreating event:
            protected void sourceEmployees_ObjectCreating(object sender,
            ObjectDataSourceEventArgs e)
            {
            e.ObjectInstance = new DatabaseComponent.EmployeeDB("...");
            }
            

Databinding life cycle

  1. The page object is created (based on the .aspx file).
  2. The page life cycle begins, and the Page.Init and Page.Load events fire.
  3. All other control events fire.
  4. The data source controls perform any updates. If a row is being updated, the Updating and Updated events fire. If a row is being inserted, the Inserting and Inserted events fire. If a row is being deleted, the Deleting and Deleted events fire.
  5. The Page.PreRender event fires.
  6. The data source controls perform any queries and insert the retrieved data in the linked controls. The Selecting and Selected events fire at this point.
  7. The page is rendered and disposed.

Custom Sorting Paging for GridView

ObjectDataSource can do sorting and paging without writing code in the .aspx page. The idea of using ObjectDataSource is to move the code out of the page. (The markup that wires it up is sketched after the class below.)
using System.Data;

public class MyCustomer
{
   public static int  SelectCount()
   {
     return 20;
   }

   public static DataTable SelectData()
   {
     return SelectData(null);
   }


   public static DataTable SelectData(string sortExpression)
   {
     return SelectData(0, 0);
   }

   public static DataTable SelectData(int maximumRows, int startRowIndex)
   {
      return SelectData(null, maximumRows, startRowIndex);
   }

   public static DataTable SelectData(string sortExpression, int maximumRows, int startRowIndex)
   {
     DataTable dt = new DataTable();
     dt.Columns.Add("id", typeof(int));
     dt.Columns.Add("name", typeof(string));
     int upbound = SelectCount();
     if (maximumRows != 0)
     {
       if (upbound > maximumRows)
       {
         upbound = maximumRows;
       }
     }

     for (int i = 0; i < upbound; i++)
     {
        int current = i + startRowIndex + 1;
        dt.Rows.Add(new object[] { current, current.ToString() });
     }
     return dt;
  }
}
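With these overloads in place, the data source and grid might be wired up as in the following sketch (the control IDs are assumptions; the startRowIndex and maximumRows parameter names in the class already match the ObjectDataSource defaults, so only the count method and the sort parameter need to be named):

            <asp:ObjectDataSource ID="sourceCustomers" runat="server"
                TypeName="MyCustomer"
                SelectMethod="SelectData"
                SelectCountMethod="SelectCount"
                EnablePaging="True"
                SortParameterName="sortExpression" />

            <asp:GridView ID="gridCustomers" runat="server"
                DataSourceID="sourceCustomers"
                AllowPaging="True"
                AllowSorting="True" />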
Because the GridView does not have a VirtualItemCount property (the DataGrid does), manually binding it to a data source while paging is very difficult. This article provides a solution that implements the equivalent of VirtualItemCount.

Reading Binary Data Efficiently

protected void Page_Load(object sender, System.EventArgs e)
{
  // requires the System.Data.SqlClient and System.Web.Configuration namespaces
  string connectionString =
    WebConfigurationManager.ConnectionStrings["Pubs"].ConnectionString;
  SqlConnection con = new SqlConnection(connectionString);
  string SQL = "SELECT logo FROM pub_info WHERE pub_id='1389'";
  SqlCommand cmd = new SqlCommand(SQL, con);
  try
  {
    con.Open();
    // SequentialAccess streams the row instead of buffering it all in memory
    SqlDataReader r = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
    if (r.Read())
    {
      int bufferSize = 100;                // Size of the buffer.
      byte[] bytes = new byte[bufferSize]; // The buffer of data.
      long bytesRead;                      // The number of bytes read.
      long readFrom = 0;                   // The starting index.
      // Read the field 100 bytes at a time.
      do
      {
        bytesRead = r.GetBytes(0, readFrom, bytes, 0, bufferSize);
        // Write only the bytes actually read, so the final partial
        // chunk doesn't emit stale buffer contents.
        Response.OutputStream.Write(bytes, 0, (int)bytesRead);
        readFrom += bytesRead;
      } while (bytesRead == bufferSize);
    }
    r.Close();
  }
  finally
  {
    con.Close();
  }
}

GridView RowCommand event

In RowDeleting, SelectedIndexChanging, RowUpdating, and RowEditing, you can retrieve the row index from the event arguments, such as e.RowIndex, e.NewSelectedIndex, or e.NewEditIndex. But GridViewCommandEventArgs has no index property, so how can you get the row index in the RowCommand event? It turns out that GridViewCommandEventArgs has an e.CommandArgument property, and for the built-in commands above it carries the row index. Here is my scenario: I want a button that shows related info, and I want its command name to be showInfo, but then in the RowCommand event I cannot know the row index. My solution is not to use "showInfo" as the command name; use "Select" instead, which triggers the SelectedIndexChanging event, where you have e.NewSelectedIndex. But suppose you have two action buttons, one for showUserInfo and another for showOtherInfo; if both use the Select command, how do you differentiate them? I do it like this:

            string commandArgs;

            protected void gv_RowCommand(object sender, GridViewCommandEventArgs e)
            {
                // a field, so the value survives into SelectedIndexChanging
                // later in the same postback
                commandArgs = (string)e.CommandArgument;
            }

            protected void gv_SelectedIndexChanging(object sender, GridViewSelectEventArgs e)
            {
                int rowIndex = e.NewSelectedIndex;
                if (commandArgs == "showUserInfo")
                {
                    //..
                }
                else if (commandArgs == "showOtherInfo")
                {
                    //..
                }
            }

This works because RowCommand fires before the SelectedIndexChanging event.

Aug 27, 2007

Ten Steps to professional web type

  • Do not use too many fonts on one page.
  • Do not use centered, right-justified, or full-justified text on a web page.
  • Give your text room to breathe.
  • Do not use bold type for paragraph text.
  • Never, never, never use all capitalized text.
  • Use plenty of headlines and subheads.
  • Divide your page into columns.
  • Don't use huge text.
  • Do not use underlined text for anything but a link.
  • For graphic text, learn correct typography.

SharePoint Area

An area is like a top-level site. Topics, News, and Sites are three areas created by default, and they share the same features that any area has. An area has three default components (portal listings, a document library, and an image library). The Topics area has a shortcut to create subareas but does not use any of these components. News uses portal listings to add news items. The Sites area does not use the default components either; instead it adds one more custom list (named Sites). The Sites area has a shortcut to create a site, and during site creation you can add a shortcut to that site; this shortcut is actually a record in the Sites custom list. The root area is Home; the Topics, News, and Sites areas are all subareas under it.

Why jQuery works

Its fundamental philosophy, centering on collections of DOM elements, puts it squarely where most JavaScript developers program most. By contrast, other frameworks, like Prototype and Dojo, take a functional approach. Sure, they're fully capable of addressing DOM elements, just like jQuery, but these other frameworks make entirely different choices. Prototype, for one, fancies itself a true object-oriented extension of JavaScript's paltry native offerings. In pursuit of true object-orientedness, its developers have put a substantial amount of time into developing object-oriented classes for different types of functionality. A class for forms, a class for elements, a class for events. It goes on and on. It's perfectly possible to write clean, good-looking Prototype code. And Prototype can emulate one of the best things about jQuery's focus on DOM element collections: its chainability. But jQuery conceives of modern JavaScript development the way many major players in the JavaScript community are starting to see it: as DOM scripting first and foremost. For those of us whose JavaScript programming focuses primarily on page elements, and I suspect that's most of us, jQuery makes the work dead simple.

Most jQuery methods start with the collection of elements, using the handy support for CSS3, XPath, and a slew of custom expressions (like :visible, which returns only visible elements, and :checked, which returns only checked form fields). Once you obtain a group of elements, the fun begins. Add .fadeIn("slow") and each of the elements will fade in, slowly. But we're not done. Without skipping a beat, add .addClass("thisIsDamnFun"). Each element will get the class "thisIsDamnFun." It is, isn't it? And it can go on from there. Add .click(function() { alert("Hello") }); to throw up an alert box when any of the elements are clicked. Add .append("Hello") and the word hello will be appended to the end of each of the matched elements. Cool, huh?

Now that you've seen the power of jQuery methods, how do we get the element collection in the first place? Happily, we have CSS (1-3) at our disposal, as well as a limited subset of XPath, and some nice custom expressions thrown in for good measure. When I say CSS3, I mean it: jQuery supports the ~ selector, :not(expr), and attributes via [@attr='whatever']. XPath support is a bit more limited, but most of the good stuff is here. The / and // operators are available, as are the parent and preceding sibling axes. jQuery supports :first, :last, and :eq(n), a slimmed-down version of [position() = n]. Finally, jQuery supports testing for contained elements via [tagName]. And because the jQuery parser operates on XHTML, it can easily be co-opted for parsing raw XML from AJAX requests. And of course, jQuery has the full complement of AJAX methods through $.ajax and $().load.

cross-site group

A cross-site group spans the sites under a top-level site, but it does not span top-level sites.

We don't need magic.

Today, a coworker from another team needed to read my team's code. He found a class called MagicObject. After tracing through it for a while, he still couldn't figure out why it exists. So he asked me: what the hell is this class for? Although I know its purpose, and I know who wrote it, I said: "If you can understand magic, it is not magic; its existence is to confuse people." In fact, this is the habit of those "creative" programmers. Code is not only a tool for the coder to communicate with the computer; it is a tool to communicate with your team members. If it is so secret, so magic, shouldn't it be locked away and never viewed? In fact, the job of a programmer is to demystify the design via code. I have to say: let the magic go to hell.

Ten good practices for writing JavaScript in 2005

Ten good practices for writing JavaScript in 2005. Very good!

dbcc freeproccache

You can use "dbcc freeproccache" to clear the procedure cache and ensure that compilation will take place afterwards. However, it has a drawback: because it deletes the contents of the procedure cache, all statements and stored procedures will have to be compiled anew.

AJAX basic

            var xmlHttp = null;
            if (window.XMLHttpRequest) { // IE7, Mozilla, Safari, Opera, etc.
                xmlHttp = new XMLHttpRequest();
            } else if (window.ActiveXObject) {
                try {
                    xmlHttp = new ActiveXObject("Microsoft.XMLHTTP"); // IE 5.x, 6
                } catch (e) {}
            }

            function sendRequest(url) {
                if (xmlHttp) {
                    xmlHttp.open("GET", url, true); // true = async
                    xmlHttp.onreadystatechange = onCallback;
                    xmlHttp.setRequestHeader('Content-type',
                        'application/x-www-form-urlencoded');
                    xmlHttp.send(null);
                }
            }

            function onCallback() {
                if (xmlHttp.readyState == 4) { // 4 = request complete
                    if (xmlHttp.status == 200) {
                        var r = document.getElementById('results');
                        r.innerHTML = xmlHttp.responseText;
                    } else {
                        alert('Error: ' + xmlHttp.status);
                    }
                }
            }

Investigate your SQL's compilation

The following script investigates the compile events and the corresponding counters and values for your statement:
            SET NOCOUNT ON;
            USE Northwind; -- use your database name here
            DBCC FREEPROCCACHE; -- empty the procedure cache
            GO
            -- we will use tempdb..OptStats table to capture
            -- the information from several executions
            -- of sys.dm_exec_query_optimizer_info
            IF (OBJECT_ID('tempdb..OptStats') IS NOT NULL)
            DROP TABLE tempdb..OptStats;
            GO
            -- the purpose of this statement is
            -- to create the temporary table tempdb..OptStats
            SELECT 0 AS Run, *
            INTO tempdb..OptStats
            FROM sys.dm_exec_query_optimizer_info;
            GO
            -- this will populate the procedure cache
            -- with this statement's plan so that it will not
            -- generate any optimizer events when executed
            -- next time
            -- the following GO is intentional to ensure
            -- the query plan reuse will happen for the following
            -- INSERT for its next invocation in this script
            GO
            INSERT INTO tempdb..OptStats
            SELECT 1 AS Run, *
            FROM sys.dm_exec_query_optimizer_info;
            GO
            -- same reason as above; observe the "2" replaced "1"
            -- therefore we will have different plan
            GO
            INSERT INTO tempdb..OptStats
            SELECT 2 AS Run, *
            FROM sys.dm_exec_query_optimizer_info;
            GO
            -- empty the temporary table
            TRUNCATE TABLE tempdb..OptStats
            GO
            -- store the "before run" information
            -- in the temporary table with the output
            -- of sys.dm_exec_query_optimizer_info
            -- with value "1" in the column Run
            GO
            INSERT INTO tempdb..OptStats
            SELECT 1 AS Run, *
            FROM sys.dm_exec_query_optimizer_info;
            GO
            -- your statement or batch is executed here

            /*** the following is an example
            SELECT C.CustomerID, COUNT(O.OrderID) AS NumOrders
            FROM dbo.Customers AS C
            LEFT OUTER JOIN dbo.Orders AS O
            ON C.CustomerID = O.CustomerID
            WHERE C.City = 'London'
            GROUP BY C.CustomerID
            HAVING COUNT(O.OrderID) > 5
            ORDER BY NumOrders;
            ***/
            GO
            -- store the "after run" information
            -- in the temporary table with the output
            -- of sys.dm_exec_query_optimizer_info
            -- with value "2" in the column Run
            GO
            INSERT INTO tempdb..OptStats
            SELECT 2 AS Run, *
            FROM sys.dm_exec_query_optimizer_info;
            GO
            -- extract all "events" that changed either
            -- the Occurrence or Value column value between
            -- the Runs 1 and 2 from the temporary table.
            -- Display the values of Occurrence and Value
            -- for all such events before (Run1Occurrence and
            -- Run1Value) and after (Run2Occurrence and
            -- Run2Value) executing your batch or query.
            -- This is the result set generated by the script.
            WITH X (Run,Counter, Occurrence, Value)
            AS
            (
            SELECT *
            FROM tempdb..OptStats WHERE Run=1
            ),
            Y (Run,Counter, Occurrence, Value)
            AS
            (
            SELECT *
            FROM tempdb..OptStats
            WHERE Run=2
            )
            SELECT X.Counter, Y.Occurrence-X.Occurrence AS Occurrence,
            CASE (Y.Occurrence-X.Occurrence)
            WHEN 0 THEN (Y.Value*Y.Occurrence-X.Value*X.Occurrence)
            ELSE (Y.Value*Y.Occurrence-X.Value*X.Occurrence)/(Y.Occurrence-X.Occurrence)
            END AS Value
            FROM X JOIN Y
            ON (X.Counter=Y.Counter
            AND (X.Occurrence<>Y.Occurrence OR X.Value<>Y.Value));
            GO
            -- drop the temporary table
            DROP TABLE tempdb..OptStats;
            GO

            

Code ScriptManager Generate

When a page contains a ScriptManager control, the rendered page picks up script references served through ScriptResource.axd that load the Microsoft Ajax Library (starting with MicrosoftAjax.js), plus a short inline script that initializes the client runtime, Sys.Application, described below.

sys.application

If you enable the Microsoft Ajax Library, a new lifecycle starts on the client side. As soon as the browser loads the main script file, MicrosoftAjax.js, the client runtime creates a global JavaScript object, the Application object, and stores it in a global variable called Sys.Application. This new object becomes the brains of a web page in the browser. Despite its name, it plays a role similar to the Page object on the server side. Once the Page object is done on the server side, the processing on the client side is delegated to Sys.Application, which has two main goals:

  • Providing a centralized place to execute the client code. This goal is reached by defining a custom page lifecycle on the client. As you'll see in a moment, the client page lifecycle starts when the browser loads the page and ends when the user navigates away from the page or the page is reloaded. When each stage in the lifecycle is entered, the Application object raises a corresponding event.
  • Hosting the client components instantiated in the page. Once instantiated, client components become children of the Application object and can be easily accessed through it. Also, they're automatically disposed by the Application object when the web page is unloaded by the browser.
The primary tenet behind components is code reusability. Components implement a well-defined set of interfaces that allows them to interact with other components and to be interchanged between applications. Thanks to the base interfaces, the code encapsulated by components can change at any time without affecting the other processing logic.
The Microsoft Ajax Library provides specialized client classes that simplify the authoring of client components. The group of classes related to component development is called the client component model and closely mirrors the model in use in the .NET framework. In this way, you can write component-oriented client applications using JavaScript code.

javascript delegate callback

In C#, you can create a delegate as in the code below.
            // the delegate type used below
            public delegate void SampleDelegate(string message);

            class Animal
            {
                public string Greeting = "Hello,";
                public void Greet(string message)
                {
                    Console.WriteLine(this.Greeting + message);
                }
            }

            class Client
            {
                public SampleDelegate TestDelegate;
                public void Demo()
                {
                    Animal a = new Animal();
                    TestDelegate = new SampleDelegate(a.Greet);
                    TestDelegate("Animal");
                }
            }
            
But in JavaScript, "this" is bound to the context of the caller. In the ASP.NET AJAX library, there is a function to create a delegate.
            Function.createDelegate = function Function$createDelegate(instance, method)
            {
                var e = Function._validateParams(arguments, [
                    {name: "instance", mayBeNull: true},
                    {name: "method", type: Function}
                ]);
                if (e) throw e;
                // it is not the same as: return method.apply(instance, arguments);
                return function() {
                    return method.apply(instance, arguments);
                }
            }
            
Please note that it returns a function pointer, and that function calls the method in the context of instance.
            function pageLoad() {
                // test is a property of the window object.
                this.test = "I'm a test string!";

                // Create a delegate that points to onButtonClick and pass the
                // window object as the first argument.
                var clickDelegate = Function.createDelegate(this, onButtonClick);

                // Handle the click event with the delegate.
                $addHandler($get('testButton'), 'click', clickDelegate);

                // If we instead called $addHandler($get('testButton'), 'click', onButtonClick),
                // "this" inside the handler would reference the button, and its
                // test value would be null.
            }

            function onButtonClick() {
                // Access the window object.
                alert(this.test);
            }
            
The Function.createDelegate method is useful because you don’t have to store in a global variable—or even in a DOM element—the context that you want to access in the event handler.
            $addHandlers(buttonElement,
                { click: onButtonClick, mouseover: onMouseOver },
                this); // "this" becomes the context inside the handlers
A callback function in the ASP.NET AJAX library is similar to a delegate, but they solve different problems. A delegate solves the problem of the "this" context: it accepts an instance parameter, and "this" will refer to that instance. In a callback function, "this" is still in the context of the caller, but the callback function accepts a context parameter. Both concepts are very useful in AJAX, because it is all about reuse: the same section of code can be reused in different "contexts"!
            Function.createCallback = function Function$createCallback(method, context) {
                var e = Function._validateParams(arguments, [
                    {name: "method", type: Function},
                    {name: "context", mayBeNull: true}
                ]);
                if (e) throw e;

                return function() {
                    var l = arguments.length;
                    if (l > 0) {
                        // append the context as an extra trailing argument
                        var args = [];
                        for (var i = 0; i < l; i++) {
                            args[i] = arguments[i];
                        }
                        args[l] = context;
                        return method.apply(this, args);
                    }
                    return method.call(this, context);
                }
            }

            // example
            function pageLoad() {
                // The context object.
                var context = { date: new Date() };

                // Create a callback that points to onButtonClick and pass
                // the context object.
                var clickCallback =
                    Function.createCallback(onButtonClick, context);

                // Attach a handler to the click event of the button.
                $addHandler($get('myButton'), 'click', clickCallback);
            }

            function onButtonClick(evt, context) {
                // Here we can access both the event object and
                // the context.
                var loadTime = context.date;
                var elapsed = new Date() - loadTime;
                alert(this);
                alert((elapsed / 1000) + ' seconds');
            }
            

Aug 26, 2007

Inline box

Inline boxes are laid out in a line horizontally. Their horizontal spacing can be adjusted using horizontal padding, borders, and margins. However, vertical padding, borders, and margins have no effect on the height of an inline box. The horizontal box formed by a line is called a line box, and a line box will always be tall enough for all the inline boxes it contains. However, setting the line height can increase the height of this box.

style, currentStyle, runtimeStyle

style represents the style attribute of an element, but think of it as a JavaScript object: if an element has no style attribute defined, element.style.color will be empty. currentStyle is an IE object attached to an element; in Firefox it is not accessible. It is a read-only object, so you can get the style no matter how the style was created (either from a CSS style sheet or from the style attribute of the element). runtimeStyle is also an IE-proprietary object; it is meant to be used for writing.

            function getStyle(elem, name)
            {
                // inline style wins if present
                if (elem.style[name])
                {
                    return elem.style[name];
                }
                // IE: computed style via currentStyle
                else if (elem.currentStyle)
                {
                    return elem.currentStyle[name];
                }
                // W3C: getComputedStyle wants hyphenated property names
                else if (document.defaultView && document.defaultView.getComputedStyle)
                {
                    name = name.replace(/([A-Z])/g, "-$1");
                    name = name.toLowerCase();
                    var s = document.defaultView.getComputedStyle(elem, "");
                    return s && s.getPropertyValue(name);
                }
                else
                {
                    return null;
                }
            }

            

A custom tooltip

            function showTip(oEvent)
            {
                var oDiv = document.getElementById("divTip1");
                oDiv.style.visibility = "visible";
                // position the tip just below and to the right of the cursor;
                // in standards mode these values need a unit, hence the "px"
                oDiv.style.left = (oEvent.clientX + 5) + "px";
                oDiv.style.top = (oEvent.clientY + 5) + "px";
            }
            

Aug 24, 2007

floating settings affected by the structure

We all want to express our documents in a semantic way, but sometimes the structure of the document does affect the CSS. For example, take a nav element placed before the content element, roughly:

            <div id="nav">...</div>
            <div id="content">...</div>

If we change the structure of the document and put the nav behind the content, we need to do more styling, and it is not straightforward: you need to float both elements, and you need a negative margin for the nav item. I think the performance is slower, too. The only questionable benefit is that it puts more emphasis on the content element.

Aug 22, 2007

Reference assembly generated in asp.net code file

<%@ Register TagPrefix="msdn" Namespace="MsdnMag" Assembly="__code" %>
This is the way ASP.NET refers to an auto-compiled assembly.

