Sep 27, 2007

Workflow in WSS3 notes

Because not all of the activities available in the Visual Studio 2005 designer for Windows Workflow Foundation are relevant in the context of SharePoint sites, Windows SharePoint Services supports a subset of the available activities. These include the following:

  • Code
  • ConditionedActivityGroup
  • Scope
  • Sequence
  • Replicator

WSS provides specific activities that help in three main areas: creating, updating, completing, and deleting SharePoint tasks; creating, updating, and deleting SharePoint task alerts; and enabling workflow forms within specific scopes to allow users to modify workflows that are in progress.

WSS3: “Your search cannot be completed because this site is not assigned to an indexer”

  • Go to SharePoint Central Administration > Application Management > Content Databases (Underneath SharePoint Web Application Management).
  • Select the correct SharePoint web application and click the name of the content database. This opens the “Manage Content Database Settings” page.
  • Make sure that the Search Server is set on the “Manage Content Database Settings” page.

Sep 23, 2007

Core workflow services

  • Scheduling : Creates and manages the threads used by the runtime engine to execute workflow instances.
  • Commit Work Batch: Manages the transactions used by the runtime engine to maintain consistency between the internal workflow state and external data stores.
  • Persistence: Handles persistence of a workflow instance at the direction of the runtime engine.
  • Tracking: Provides the ability to instrument workflow instances by recording tracking events. Unlike the other core services, you are permitted to register multiple tracking services with the runtime engine.

Sep 20, 2007

Best practices for using Trace and Debug

            using System.Diagnostics;

            class Program
            {
                private static TraceSource mySource = new TraceSource("TraceSourceApp");

                public static TraceSource MasterTraceSource
                {
                    get { return mySource; }
                }

                static void Main(string[] args)
                {
                    Trace.WriteLine("program started...");
                    // This assertion deliberately fails to demonstrate Trace.Assert.
                    Trace.Assert(1 != 1, "something wrong");
                    MasterTraceSource.TraceInformation("Trace information");
                    MasterTraceSource.TraceEvent(TraceEventType.Error, 1, "Error message.");
                    MasterTraceSource.TraceEvent(TraceEventType.Warning, 2, "Warning message.");
                }
            }


            <?xml version="1.0" encoding="utf-8" ?>
            <configuration>
              <system.diagnostics>
                <sources>
                  <!-- "TraceSourceApp" is the name of the TraceSource, i.e. new TraceSource("TraceSourceApp").
                       switchName references the switch defined in <switches>; the switch controls
                       whether information is sent to the listeners. -->
                  <source name="TraceSourceApp" switchName="sourceSwitch"
                          switchType="System.Diagnostics.SourceSwitch">
                    <listeners>
                      <add name="consoleListener" />
                      <add name="textfileListener" />
                      <remove name="Default" />
                    </listeners>
                  </source>
                </sources>
                <trace autoflush="true" indentsize="4">
                  <listeners>
                    <add name="consoleListener" />
                    <remove name="Default" />
                  </listeners>
                </trace>
                <switches>
                  <add name="sourceSwitch" value="Verbose" />
                </switches>
                <sharedListeners>
                  <!-- the filter is optional; it controls whether the listener writes
                       the information to its underlying data store -->
                  <add name="consoleListener" type="System.Diagnostics.ConsoleTraceListener">
                    <filter type="System.Diagnostics.EventTypeFilter" initializeData="Error" />
                  </add>
                  <add name="textfileListener" type="System.Diagnostics.TextWriterTraceListener"
                       initializeData="textfileListener.log">
                    <filter type="System.Diagnostics.EventTypeFilter" initializeData="Verbose" />
                  </add>
                </sharedListeners>
              </system.diagnostics>
            </configuration>


When you enable tracing under system.web with enabled="true" requestLimit="20" writeToDiagnosticsTrace="true" pageOutput="false", the trace output not only goes to the page or trace.axd; it is also forwarded to the System.Diagnostics trace listeners. And this approach is much simpler.
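For reference, that setting is the trace element under system.web in web.config; a minimal sketch:

```xml
<configuration>
  <system.web>
    <!-- writeToDiagnosticsTrace forwards page-trace messages to the
         System.Diagnostics trace listeners registered in <system.diagnostics> -->
    <trace enabled="true"
           requestLimit="20"
           writeToDiagnosticsTrace="true"
           pageOutput="false" />
  </system.web>
</configuration>
```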

Sep 19, 2007

Runtime Services

The workflow runtime engine supports the concept of external services. Services are class instances that you create and register with the runtime during application startup. Each service fulfills a defined purpose. Services come in two varieties: core and local. The functionality provided by core services is defined by Microsoft. In some cases, the workflow runtime will register its own default implementation of a core service if you don’t provide your own. In other cases, the service is optional and a default is not automatically provided for you.

For example, persistence of workflows is important, especially when they are long-running. WF provides the SqlWorkflowPersistenceService class (found in the System.Workflow.Runtime.Hosting namespace) for this purpose. This is a service that handles the persistence duties using a SQL database. When a workflow instance is idled or suspended, its current state can be saved to a database. When the workflow instance is needed again, it is reconstituted within the workflow runtime by retrieving it from the database. By registering this service with the workflow runtime, all of your workflows make use of this functionality. If you prefer another persistence mechanism, you can derive your own persistence service from the abstract WorkflowPersistenceService class and register it with the runtime engine. The persistence service is considered a core workflow service.

Other core services include Commit Work Batch (DefaultWorkflowCommitWorkBatchService), runtime thread management (DefaultWorkflowSchedulerService), and workflow tracking (SqlTrackingService). All of these can be used in their default form or extended and enhanced by you.

On the other hand, local services are developed by you to serve any purpose. One common use of local services is to act as a communications conduit for workflows. The purpose, design, and implementation of this type of service class are completely up to you. You register a local service in a similar way to core services. Once your local service is registered with the runtime, a workflow can invoke methods on the service, or handle events that originate from the service. Local services are an important mechanism used by workflows to communicate with other parts of your application.
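As a sketch of this pattern, a local communication service can be defined and registered using WF's standard ExternalDataExchange mechanism. The service and interface names below (IOrderService, OrderService) are hypothetical examples, not part of WF itself:

```csharp
using System;
using System.Workflow.Activities;
using System.Workflow.Runtime;

// Hypothetical contract between the workflow and the host application.
[ExternalDataExchange]
public interface IOrderService
{
    void OrderCompleted(Guid instanceId);                     // invoked by the workflow
    event EventHandler<ExternalDataEventArgs> OrderApproved;  // raised by the host
}

public class OrderService : IOrderService
{
    public event EventHandler<ExternalDataEventArgs> OrderApproved;

    public void OrderCompleted(Guid instanceId)
    {
        Console.WriteLine("Order completed for workflow {0}", instanceId);
    }

    // Called by the host application to signal the workflow.
    public void RaiseOrderApproved(Guid instanceId)
    {
        if (OrderApproved != null)
            OrderApproved(null, new ExternalDataEventArgs(instanceId));
    }
}

public static class LocalServiceRegistration
{
    // Local services are added to an ExternalDataExchangeService,
    // which is itself registered with the workflow runtime.
    public static void Register(WorkflowRuntime runtime)
    {
        ExternalDataExchangeService exchange = new ExternalDataExchangeService();
        runtime.AddService(exchange);
        exchange.AddService(new OrderService());
    }
}
```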

Runtime Engine

WF includes a core runtime engine that is represented by the WorkflowRuntime class (found in the System.Workflow.Runtime namespace). The workflow runtime is not a self-contained application. Instead, an instance of this class must be hosted by your application in order to execute and manage workflows. You host the workflow runtime and the runtime hosts the individual workflow instances. The workflow runtime is the component that provides an execution environment for the workflow instances.

The WorkflowRuntime class includes methods that permit you to configure and control the workflow runtime. By subscribing to events that are exposed by this class, you can also receive status change notifications. For example, you can receive an event notification when an individual workflow instance starts, terminates, or completes successfully.

The runtime engine provides an execution environment for your workflows. You don’t directly execute workflows within your application. Instead, you ask the runtime engine to create an instance of a workflow which you then instruct to start.

By default, workflows execute asynchronously in a thread that is managed by the runtime engine. This allows you to start multiple workflows from your host application at the same time, with all of them under the control of the runtime engine.

Each workflow can go through multiple execution states throughout its lifetime. For example, all workflows start in the created state and then move into the running state when execution begins. The workflow can also pass into states such as suspended, terminated, or completed. Other events associated with a workflow such as idled, persisted, loaded, or unloaded are possible. It is the runtime engine that manages the life and death of each workflow as it passes through these states.

The runtime engine is also responsible for scheduling and managing execution threads, workflow persistence, workflow transactions (committing of batched work), and workflow tracking. However, while the responsibility for these tasks rests with the runtime engine, it doesn’t actually handle these duties by itself. Each of these tasks has been implemented as a runtime service that you create and register with the runtime engine during application startup. This modular design permits you to swap out a default implementation in favor of one that you’ve developed.

These core services can only be added to the runtime engine before it is started. Once the StartRuntime method is called on the WorkflowRuntime object, you are no longer allowed to add core services. This restriction only applies to these core services and not to local services, which were covered above.

With the exception of the Tracking service, only one instance of each type of service can be added to the runtime engine. And since these services must be added prior to starting the runtime engine, you are unable to replace a service later. However, you are allowed to create and register multiple instances of the tracking service.

With this design, Microsoft is encouraging you to implement your own services in order to customize and extend the core runtime engine. As a starting point, WF includes multiple implementations of some of the services, each one with a different behavior.
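Putting the pieces above together, here is a minimal hosting sketch: register a core service, subscribe to status events, start the runtime, then create and start an instance. The workflow type (MyWorkflow) and the persistence connection string are assumptions for illustration:

```csharp
using System;
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;

class WorkflowHost
{
    static void Main()
    {
        WorkflowRuntime runtime = new WorkflowRuntime();

        // Core services must be registered before StartRuntime is called.
        runtime.AddService(new SqlWorkflowPersistenceService(
            "Data Source=.;Initial Catalog=WFPersist;Integrated Security=True"));

        // Subscribe to status-change notifications.
        runtime.WorkflowCompleted += delegate(object sender, WorkflowCompletedEventArgs e)
        {
            Console.WriteLine("Completed: {0}", e.WorkflowInstance.InstanceId);
        };
        runtime.WorkflowTerminated += delegate(object sender, WorkflowTerminatedEventArgs e)
        {
            Console.WriteLine("Terminated: {0}", e.Exception.Message);
        };

        runtime.StartRuntime();

        // Ask the runtime to create an instance, then instruct it to start.
        // MyWorkflow is a hypothetical workflow type defined elsewhere.
        WorkflowInstance instance = runtime.CreateWorkflow(typeof(MyWorkflow));
        instance.Start();

        // ... wait for the workflow to finish, then shut down ...
        runtime.StopRuntime();
    }
}
```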

Workflow Types

Sequential Workflow

Sequential workflows declare a series of steps that are executed in a prescribed order. Within the workflow, the flow of control is defined with familiar constructs such as if-else branching and while loops.

State Machine Workflows

State machine workflows don’t define a fixed sequence of steps. Instead, they define a set of states, with possible transitions between each state. Each state may contain one or more steps that are executed during state transitions.

How to choose

Both workflow types are suitable for use in a wide variety of applications. You can even use both types within the same application. However, each workflow type targets a different kind of problem. The deciding factor when choosing the workflow type usually comes down to control.

A sequential workflow defines the flow of control within the workflow. Since it specifies the exact sequence of steps within the workflow, it is in control. It works best for system interaction problems where the prescribed steps are known at design time.

State machine workflows don’t define a fixed flow of control within the workflow. The exact sequence of state transitions is controlled by external events. For this reason, state machine workflows are well-suited to problems that involve human interaction. Humans don’t always do things in a prescribed sequence. Modeling a problem involving human interaction requires flexibility, and a state machine workflow provides this.

In most situations, you can make either type of workflow work. However, choosing the wrong workflow type for the problem may result in a more complicated, inelegant solution. You need to choose the workflow type that feels like a natural fit to the problem. For the account withdrawal example, which workflow type is best? You could implement a working application using either type. This doesn’t mean that both workflow types are equally suited to solving the problem.

Workflow basics

Workflow is simply an ordered series of steps that accomplishes some defined purpose according to a set of rules. By that definition, what I just described is a workflow. To a developer, the word workflow typically conjures up images of a highly visual environment where complex business rules and flow of control are declared graphically. It’s an environment that allows you to easily visualize and model the activities (steps) that have been declared to solve a problem. And since you can visualize the activities, it’s easier to change, enhance, and customize them. But there is still more to workflows than just the development environment. Workflows represent a different programming model. It’s a model that promotes a clear separation between what to do and when to do it. This separation allows you to change the when without affecting the what. Workflows generally use a declarative programming model rather than a procedural one. With this model, business logic can be encapsulated in discrete components. But the rules that govern the flow of control between components are declarative.
General-purpose languages such as C# or Visual Basic can obviously be used to solve business problems. But the workflow programming model really enables you to implement your own domain-specific language. With such a language, you can express business rules using terms that are common to a specific problem domain. Experts in that domain are able to view a workflow and easily understand it, since it is declared in terminology that they understand.

Workflows allow you to easily model system and human interactions. A system interaction is how we as developers would typically approach a problem. You define the steps to execute and write code that controls the sequence of those steps. The code is always in total control. Human interactions are those that involve real live people. The problem is that people are not always as predictable as your code. For example, you might need to model a mortgage loan application. The process might include steps that must be executed by real people in order to complete the process. How much control do you have over the order of those steps? Does the credit approval always occur first, or is it possible for the appraisal to be done first? What about the property survey? Is it done before or after the appraisal? And what activities must be completed before you can schedule the loan closing? The point is that these types of problems are difficult to express using a purely procedural model because human beings are in control. The exact sequence of steps is not always predictable. The workflow model really shines when it comes to solving human interaction problems.

Why Windows Workflow Foundation

If you are developing line-of-business applications, you can use WF to orchestrate the business rules. If your application is comprised of a series of human interactions, you can use a WF state machine workflow to implement logic that can react to those interactions. If you need a highly customizable application, you can use the declarative nature of WF workflows to separate the business logic from the execution flow. This allows customization of the flow of control without affecting the underlying business logic. And if you are looking for a better way to encapsulate and independently test your application logic, implement the logic as discrete custom activities that are executed within the WF runtime environment.

  • It provides a flexible and powerful framework for developing workflows. You can spend your time and energy developing your own framework, visual workflow designer, and runtime environment. Or you can use a foundation that Microsoft provides and spend your valuable time solving real business problems.
  • It promotes a consistent way to develop your applications. One workflow looks very similar to the next. This consistency in the programming model and tools improves your productivity when developing new applications and maintaining existing ones.
  • It supports sequential and state machine workflows. Sequential workflows are generally used for system interactions. State machine workflows are well-suited to solving problems that focus on human interaction.
  • It supports workflow persistence. The ability to save and later reload the state of a running workflow is especially important when modeling human interactions.
  • It supports problem solving using a domain-specific model. Microsoft encourages you to develop your own custom activity components. Each custom component addresses a problem that is specific to your problem domain and uses terminology that is common to the domain.
  • It provides a complete workflow ecosystem. In addition to the workflow runtime itself, Microsoft also provides a suite of standard activities, workflow persistence, workflow monitoring and tracking, and a workflow designer that is integrated with Visual Studio which you can also host in your own applications.
  • It is free of charge. Because of this and its tight integration with Visual Studio, it will become the de facto standard workflow framework for Windows developers. A growing community of other WF developers is already in place. They are already sharing their ideas, their custom activity components, and other code.

Sep 13, 2007


Your code can run inside a transaction scope or with no scope at all. If no scope directly wraps your code, your code follows its caller's transaction scope. If there is no scope above either, then there is no transaction control in the .NET domain.
If a scope directly wraps your code, your code behaves according to the TransactionScopeOption:

  • Required: join the ambient scope above if there is one, or create a new scope if there is none.
  • RequiresNew: always create a new scope.
  • Suppress: your code will not be part of the ambient scope above. This means that if your code executes successfully and an error occurs afterward, the error can roll back the ambient transaction but will not roll back your code's work. This is different from the "no scope" case, where your code can be rolled back if there is an ambient transaction.

When to use what? First, don't reason about your caller; you never know who is going to call your code.
If your code requires a transaction internally (for example, it performs multiple inserts/updates/deletes) and can be part of an outer ambient transaction, use TransactionScopeOption.Required. If your code requires a transaction internally, but that transaction must be independent of any existing ambient (outer) transaction because your code must not be affected by it, use TransactionScopeOption.RequiresNew. These two options are similar.
The difficult decision is when to use Suppress and when to use no scope at all.
If your code must not be affected by the ambient transaction and performs only a single insert/update/delete, use TransactionScopeOption.Suppress. If your code performs only a single insert/update/delete and should be part of the ambient transaction, do not use a scope at all.
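A minimal sketch of these options in code; DoWork stands in for real database calls, so the transactional effect is only illustrative here:

```csharp
using System;
using System.Transactions;

class TransactionScopeSketch
{
    static void Main()
    {
        using (TransactionScope outer = new TransactionScope(TransactionScopeOption.Required))
        {
            DoWork("outer work");

            // Required: joins the ambient (outer) transaction.
            using (TransactionScope inner = new TransactionScope(TransactionScopeOption.Required))
            {
                DoWork("inner work");
                inner.Complete();
            }

            // Suppress: runs outside the ambient transaction,
            // so it survives a rollback of the outer scope.
            using (TransactionScope audit = new TransactionScope(TransactionScopeOption.Suppress))
            {
                DoWork("audit log write");
                audit.Complete();
            }

            outer.Complete(); // without this call, the outer transaction rolls back
        }
    }

    static void DoWork(string description)
    {
        Console.WriteLine(description);
    }
}
```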

Sep 3, 2007

Controlling Changes to Content Types

You can prevent users from making changes to content types in two ways: specify a content type as read-only or define it as sealed.
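In the object model these correspond to the ReadOnly and Sealed properties of SPContentType. A sketch, where the site URL and content type name are placeholders:

```csharp
using Microsoft.SharePoint;

class ContentTypeLock
{
    static void Main()
    {
        // Placeholder URL and content type name.
        using (SPSite site = new SPSite("http://localhost"))
        using (SPWeb web = site.OpenWeb())
        {
            SPContentType ct = web.ContentTypes["My Content Type"];
            ct.ReadOnly = true;   // or: ct.Sealed = true;
            ct.Update();
        }
    }
}
```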

WSS List limits

Performance with large lists and document libraries has also been another concern with Windows SharePoint Services 2.0. For example, lists often start showing degraded performance when the number of items exceeds 2000. Document libraries have similar performance concerns. However, the rule of thumb for not exceeding 2000 documents applies to each folder within a document library as opposed to the document library itself. Therefore, coming up with a scheme to partition documents across multiple folders within a document library has come to be a popular approach in Windows SharePoint Services 2.0 for dealing with a large number of documents.

wss and 2.0

Windows SharePoint Services 3.0 provisioning starts at the level of the IIS Web site. Before you can create your first Windows SharePoint Services site, someone must run an administrative procedure to extend 3.0 functionality onto one or more IIS Web sites. In Windows SharePoint Services 2.0, the term virtual server was used to describe an IIS Web site that had been extended with Windows SharePoint Services functionality. To avoid confusion with another Microsoft product of the same name, the Windows SharePoint Services 3.0 documentation now refers to an IIS Web site extended with Windows SharePoint Services functionality as a Web application.

Windows SharePoint Services 2.0 was integrated with IIS 6.0 and ASP.NET 1.1 using an ISAPI filter DLL. This integration technique results in IIS routing requests to Windows SharePoint Services before ASP.NET. This routing has proven to be problematic in certain situations because Windows SharePoint Services takes control of an incoming HTTP request before it has a chance to be properly initialized with ASP.NET context.

The way in which Windows SharePoint Services 3.0 integrates with ASP.NET has been completely redesigned. First of all, Windows SharePoint Services 3.0 is built upon ASP.NET 2.0, which provides significant enhancements over ASP.NET 1.1. Furthermore, the integration between Windows SharePoint Services 3.0 and ASP.NET 2.0 was changed to route incoming requests through the ASP.NET runtime before Windows SharePoint Services. The Windows SharePoint Services team achieved these improvements to the routing infrastructure by removing the ISAPI filter and adding an HttpModule and an HttpHandler that are registered with ASP.NET using standard Web.config entries. This means incoming HTTP requests always enter the ASP.NET run-time environment and are fully initialized with ASP.NET context before they are forwarded to the code written by the Windows SharePoint Services team to carry out Windows SharePoint Services–specific processing.

Also note that when you extend an IIS Web site to become a Windows SharePoint Services Web application, Windows SharePoint Services 3.0 adds a wildcard application map to the IIS metabase. This wildcard application map serves to route all incoming HTTP requests to the ASP.NET runtime regardless of their extension. This wildcard application map is necessary to forward a request for any type of file (for example, .pdf, .doc, .docx) to ASP.NET, which then forwards the request to Windows SharePoint Services for processing.

Another relevant issue of the new architecture has to do with how .aspx pages are parsed and compiled. The .aspx page parser used by ASP.NET 1.1 works only with .aspx pages that reside on the local file system. However, Windows SharePoint Services architecture relies on storing .aspx pages inside a SQL Server database. Since Windows SharePoint Services 2.0 relies on ASP.NET 1.1, the Windows SharePoint Services team had to create their own .aspx page parser. Unfortunately, the .aspx parser of Windows SharePoint Services 2.0 does not support many of the richer features offered by the ASP.NET .aspx page parser.

ASP.NET 2.0 introduced a new pluggable component type known as a virtual path provider. A developer can write a custom component that retrieves .aspx pages for any location including a database such as SQL Server. Once a custom virtual path provider retrieves an .aspx page, it can then hand it off to ASP.NET to conduct the required parsing and compilation. ASP.NET also gives the virtual path provider a good deal of control as to how .aspx pages are parsed and whether they are compiled or run in a non-compile mode.

The Windows SharePoint Services 3.0 team has created their own virtual path provider named SPVirtualPathProvider, which is shown in Figure 1-1 on the next page. As you can see, the SPVirtualPathProvider is able to retrieve .aspx pages from SQL Server and then hand them off to the .aspx page parser supplied by ASP.NET 2.0. That means the Windows SharePoint Services 3.0 team was not required to evolve their .aspx page parser from the previous version. It also means that Windows SharePoint Services 3.0 does not suffer from a reduced feature set with respect to page parsing as it does with Windows SharePoint Services 2.0.

If you’re familiar with the architecture of Windows SharePoint Services 2.0, you’ve probably heard the terms “ghosting” and “unghosting” used in conjunction with the .aspx pages of a Windows SharePoint Services 2.0 site. Page ghosting is a Windows SharePoint Services feature that allows a front-end Web server to store an .aspx page template on its local file system and to share that page template across many different sites. Page ghosting offers performance benefits because Windows SharePoint Services can serve up pages for thousands of sites using a page template stored on the local file system and loaded into memory a single time. Windows SharePoint Services 2.0 supports user modifications to the page template using tools such as Microsoft Office FrontPage 2003. Once a user modifies a page template and saves the changes, a customized version of the page is stored on SQL Server for that particular site. In Windows SharePoint Services 2.0, this is often referred to as unghosting a page.

Windows SharePoint Services 3.0 still supports page templates that live on the Web server as well as customized versions of those page templates that are stored on SQL Server. However, the Windows SharePoint Services team and their documentation have stopped using the terms ghosting and unghosting because they do not translate well into other spoken languages. In Windows SharePoint Services 3.0, the term “uncustomized page” refers to a page template used directly from the local file system of the Web server, and the term “customized page” refers to a modified version of the page template that has been written to the content database for a particular site.

Another change to be aware of is that Microsoft Office FrontPage 2003 has been renamed in its new release as Microsoft Office SharePoint Designer 2007. Like the previous versions of FrontPage, Office SharePoint Designer 2007 is targeted more toward users than developers. However, it’s nonetheless a handy tool to have in your bag of tricks as a Windows SharePoint Services developer.

Office SharePoint Designer 2007 provides a code editor and WYSIWYG designer for customizing .aspx pages within Windows SharePoint Services 3.0 sites. You can also create new pages within a Windows SharePoint Services site that have no corresponding page template on the Web server. Office SharePoint Designer 2007 supports creating lists and document libraries and even supplies a new wizard for creating custom workflows on a Windows SharePoint Services site. Workflows in Windows SharePoint Services 3.0 will be discussed in more depth later in this chapter.

Sep 1, 2007

ObjectDataSource Update

Assume you create a grid that shows a list of EmployeeDetails objects. You also add a column with edit links. When the user commits an edit, the GridView fills the ObjectDataSource.UpdateParameters collection with one parameter for each property of the EmployeeDetails class, including EmployeeID, FirstName, LastName, and TitleOfCourtesy. Then, the ObjectDataSource searches for a method named UpdateEmployee() in the EmployeeDB class. This method must have the same parameters, with the same names. That means this method is a match:
            public void UpdateEmployee(int employeeID, string firstName, string lastName,
            string titleOfCourtesy)
            { ... }

This method is not a match, because the names don’t match exactly:
            public void UpdateEmployee(int id, string first, string last,
            string titleOfCourtesy)
            { ... }
This is not a match, because there’s an additional parameter:
            public void UpdateEmployee(int employeeID, string firstName, string lastName,
            string titleOfCourtesy, bool useOptimisticConcurrency)
            { ... }
The method matching algorithm is not case-sensitive, and it doesn’t consider the order or data type of the parameters. It simply tries to find a method with the right number of parameters and the same names. As long as that method is present, the update can be committed automatically, without any custom code.
Sometimes you may run into a problem in which the property names of your data class don’t exactly match the parameter names of your update method. If all you need is a simple renaming job, you need to perform the task that was described in the “Updating with Stored Procedures” section earlier, although the syntax is slightly different. First, you define the additional parameters you need, with the correct names. For example, maybe you need to rename the EmployeeDetails.EmployeeID property to a parameter named id in the update method. Here’s the new parameter you need:
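The parameter markup itself was missing from these notes; presumably it looks something like the following sketch. The data source ID and the SelectMethod/UpdateMethod names are assumptions; only the Name="id" parameter is the point:

```xml
<asp:ObjectDataSource ID="sourceEmployees" runat="server"
    TypeName="DatabaseComponent.EmployeeDB"
    SelectMethod="GetEmployees"
    UpdateMethod="UpdateEmployee">
  <UpdateParameters>
    <asp:Parameter Name="id" Type="Int32" />
  </UpdateParameters>
</asp:ObjectDataSource>
```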

Second, you react to the ObjectDataSource.Updating event, setting the value for these parameters and removing the ones you don’t want:
            protected void sourceEmployees_Updating(object sender,
                ObjectDataSourceMethodEventArgs e)
            {
                e.InputParameters["id"] = e.InputParameters["EmployeeID"];
                e.InputParameters.Remove("EmployeeID");
            }
Handling Identity Values in an Insert

So far, all the examples you’ve seen have used parameters to supply values to an update operation. However, you can also create a parameter to return a result. With the SqlDataSource, you can use this option to get access to an output parameter. With the ObjectDataSource, you can use this technique to capture the return value. To see this in action, it’s worth considering the InsertEmployee() method, which adds an employee record and returns the newly generated unique ID value as an integer:

            public int InsertEmployee(EmployeeDetails emp)
            { ... }

You don’t need to use the identity value. As you’ve seen already, linked data controls are bound after any updates are committed, which ensures that the updated information always appears in the linked controls. However, you might want to use the identity for another purpose, such as displaying a confirmation message. To capture this identity value, you need to define a parameter:

Now you can retrieve the parameter by responding to the Inserted event, which fires after the insert operation is finished:

            protected void sourceEmployees_Inserted(object sender,
                ObjectDataSourceStatusEventArgs e)
            {
                if (e.Exception == null)
                {
                    lblConfirmation.Text = "Inserted record " + e.ReturnValue.ToString();
                }
            }

Parameterized Constructor of ObjectDataSource

Sometimes, if the ObjectDataSource needs a parameterized constructor, you can supply the object yourself in the ObjectCreating event.
            protected void sourceEmployees_ObjectCreating(object sender,
                ObjectDataSourceEventArgs e)
            {
                e.ObjectInstance = new DatabaseComponent.EmployeeDB("...");
            }

Databinding life cycle

  1. The page object is created (based on the .aspx file).
  2. The page life cycle begins, and the Page.Init and Page.Load events fire.
  3. All other control events fire.
  4. The data source controls perform any updates. If a row is being updated, the Updating and Updated events fire. If a row is being inserted, the Inserting and Inserted events fire. If a row is being deleted, the Deleting and Deleted events fire.
  5. The Page.PreRender event fires.
  6. The data source controls perform any queries and insert the retrieved data in the linked controls. The Selecting and Selected events fire at this point.
  7. The page is rendered and disposed.

Custom Sorting and Paging for GridView

ObjectDataSource can do sorting and paging without writing code in the .aspx page. The idea of using ObjectDataSource is to move the code out of the page.
public class MyCustomer
{
   public static int SelectCount()
   {
      return 20;
   }

   public static DataTable SelectData()
   {
      return SelectData(null);
   }

   public static DataTable SelectData(string sortExpression)
   {
      return SelectData(0, 0);
   }

   public static DataTable SelectData(int maximumRows, int startRowIndex)
   {
      return SelectData(null, maximumRows, startRowIndex);
   }

   public static DataTable SelectData(string sortExpression, int maximumRows, int startRowIndex)
   {
      // Fabricate one page of data; a real implementation would query a
      // database and apply sortExpression.
      DataTable dt = new DataTable();
      dt.Columns.Add("id", typeof(int));
      dt.Columns.Add("name", typeof(string));
      int upbound = SelectCount();
      if (maximumRows != 0 && upbound > maximumRows)
         upbound = maximumRows;
      for (int i = 0; i < upbound; i++)
      {
         int current = i + startRowIndex + 1;
         dt.Rows.Add(new object[] { current, current.ToString() });
      }
      return dt;
   }
}
Because GridView does not have VirtualItemCount (DataGrid does), manually binding to a data source is very difficult. This article provides a solution that implements the VirtualItemCount behavior.
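For reference, a sketch of how this class might be wired up in markup. The control IDs are placeholders; the three parameter-name attributes shown are the ObjectDataSource defaults, spelled out to match the method signatures above:

```xml
<asp:ObjectDataSource ID="sourceCustomers" runat="server"
    TypeName="MyCustomer"
    SelectMethod="SelectData"
    SelectCountMethod="SelectCount"
    EnablePaging="true"
    SortParameterName="sortExpression"
    MaximumRowsParameterName="maximumRows"
    StartRowIndexParameterName="startRowIndex" />

<asp:GridView ID="gridCustomers" runat="server"
    DataSourceID="sourceCustomers"
    AllowPaging="true" AllowSorting="true" PageSize="5" />
```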

Reading Binary Data Efficiently

protected void Page_Load(object sender, System.EventArgs e)
{
  string connectionString = "..."; // connection string elided in the original notes
  SqlConnection con = new SqlConnection(connectionString);
  string SQL = "SELECT logo FROM pub_info WHERE pub_id='1389'";
  SqlCommand cmd = new SqlCommand(SQL, con);
  con.Open();
  // SequentialAccess lets the reader stream large binary fields
  // instead of loading the whole row into memory.
  SqlDataReader r = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
  if (r.Read())
  {
    int bufferSize = 100;                // Size of the buffer.
    byte[] bytes = new byte[bufferSize]; // The buffer of data.
    long bytesRead;                      // The number of bytes read.
    long readFrom = 0;                   // The starting index.
    // Read the field 100 bytes at a time.
    do
    {
      bytesRead = r.GetBytes(0, readFrom, bytes, 0, bufferSize);
      readFrom += bufferSize;
    } while (bytesRead == bufferSize);
  }
  r.Close();
  con.Close();
}

GridView RowCommand event

In the RowDeleting, SelectedIndexChanging, RowUpdating, and RowEditing events, you can retrieve the row index from the event args, such as e.RowIndex, e.NewSelectedIndex, or e.NewEditIndex. But GridViewCommandEventArgs has no index property, so how can you get the row index in the RowCommand event? It turns out that GridViewCommandEventArgs has an additional e.CommandArgument, which in the events above returns the row index. Here is a scenario: I want a button to show related info, with a command name of showInfo. But in the RowCommand event I cannot know the row index. My solution is to not use the command name "showInfo" but "Select" instead; this triggers the SelectedIndexChanging event, where e.NewSelectedIndex is available. But suppose you have two action buttons, one showUserInfo and another showOtherInfo; if both use the Select command, how do you differentiate them? I do it like this:

            private string commandArgs;

            protected void gv_RowCommand(object sender, GridViewCommandEventArgs e)
            {
                commandArgs = e.CommandArgument.ToString();
            }

            protected void gv_SelectedIndexChanging(object sender, GridViewSelectEventArgs e)
            {
                int rowIndex = e.NewSelectedIndex;
                if (commandArgs == "showUserInfo")
                {
                    // ...
                }
                else if (commandArgs == "showOtherInfo")
                {
                    // ...
                }
            }

This works because RowCommand fires before SelectedIndexChanging.