News & Posts

How to Git

posted Aug 13, 2021, 10:56 AM by Ashar Khan

Initializing your project Repo and Commit/Push to Remote Origin 

When you create a project in Azure DevOps, there are multiple options to start using Git on your project. These options are mentioned below: 

Push an existing repository 

This is the best option to use when setting up a project from scratch. It requires you to create a new local Git repository, create a project in it, then commit and push it to the remote origin. 


Follow the steps below to do all of the above: 

  1. Create a project directory using Command Prompt: 
    md <project_name> 
    cd <project_name> 

  2. Initialize the local Git repo: 
    git init 

  3. Open Visual Studio or VS Code and create a project using a suitable template in the directory. If you are using the dotnet CLI or SPFx (generator-sharepoint), then create the project using dotnet or SPFx commands in the directory. 

  4. The template will scaffold the project files. Once ready, use the following command to add the project files to the local repo: 
    git add . 

  5. Check the status: 
    git status 

  6. Commit the files to the local repo: 
    git commit -m "added project files for the first time" 

  7. Add the remote origin pointing to the Azure DevOps repo URL where you have created the project, e.g.: 
    git remote add origin https://dev.azure.com/<organization>/<project_name>/_git/SPFxRushDemo 

  8. Push the code from the local repo to the master branch in the remote repository: 
    git push --set-upstream origin master 
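The steps above can be sketched as a single script. This is a minimal local sketch: the project name, identity values, and remote URL are placeholders for your own Azure DevOps values, so the remote steps are left commented.

```shell
#!/bin/sh
# Sketch of the steps above as one script.
set -e

mkdir SPFxRushDemo && cd SPFxRushDemo     # 1. create the project directory
git init                                  # 2. initialize the local repo

# 3. scaffold the project here with Visual Studio, dotnet, or SPFx;
#    a README stands in for the scaffolded files in this sketch
echo "# SPFxRushDemo" > README.md

git add .                                 # 4. stage the project files
git status                                # 5. check the status
git config user.email "you@example.com"   # identity needed once for commits
git config user.name "Your Name"
git commit -m "added project files for the first time"   # 6. first commit

# 7./8. wire up the remote and push (placeholder URL):
# git remote add origin https://dev.azure.com/<org>/<project>/_git/SPFxRushDemo
# git push --set-upstream origin master
```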

Creating branches on the Remote/Origin repo using Azure DevOps 

How to create a branch for work items in the remote repo 

  1. Select the backlog item in the current sprint and choose Create a branch. 

  2. Select as many related items as needed and name the branch. This will create a remote branch. 

  3. Go to the local prompt and run git pull. This will bring down the newly created remote branch. 

  4. Make sure to set the current HEAD to the newly created local branch: 
    git branch (to list all branches) 
    git checkout <new-branch-name> 

  5. Work on the code and make changes to the files which need updating. 

  6. Once done updating, with the project compiling and tests passing, go to the prompt and run: 
    git status 
    git add . 
    git commit -m <message> 
    git push 

  7. This will sync the remote branch. 

  8. Go to Azure DevOps and create a pull request. 

  9. Once the pull request is reviewed and approved, complete the merge, deleting the branch. 

  10. This will merge everything back to the master branch. 
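The flow above can be exercised end to end without Azure DevOps by using a local bare repository to stand in for the remote; the branch, file, and identity names below are purely illustrative.

```shell
#!/bin/sh
# Local sketch of the branch workflow, with a bare repo standing in for the
# Azure DevOps remote.
set -e

git init --bare origin.git                # stands in for the remote repo
git clone "$PWD/origin.git" work
cd work
git checkout -b master                    # match the branch name used above
git config user.email "dev@example.com" && git config user.name "Dev"
echo base > app.txt
git add . && git commit -m "initial commit"
git push -u origin master

# Creating the branch from a work item in Azure DevOps amounts to this:
git push origin master:feature/1234-work-item

git pull                                  # brings down the new remote branch
git checkout feature/1234-work-item       # set HEAD to the new branch
echo change >> app.txt                    # work on the code
git add . && git commit -m "update for work item 1234"
git push                                  # syncs the remote branch
# (the pull request, review, and merge then happen in Azure DevOps)
```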

How to pull the changes back to the local repo 

  1. Switch back to master and pull the changes from remote to local: 
    git checkout master 
    git pull 

  2. The master branch will now have the latest updates. 

How to remove heads for all remote branches in the local repo 

  1. After git pull, with the latest changes in the local branches, you can prune the remote-tracking branches in the local repo: 
    git remote prune origin 

  2. The above command removes the local remote-tracking copies (origin/<branch-name>) of branches that no longer exist on the remote. 

NOTE: You can also use the command below to drop a single remote-tracking branch even if you have not deleted the remote branch in Azure DevOps (note that a later git fetch origin or git pull will recreate it as long as the branch still exists on the remote): 

git branch -rd <branch-name> 
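To see the prune behaviour without touching a real Azure DevOps repo, the following sketch again uses a local bare repository as the remote; all names are made up.

```shell
#!/bin/sh
# Sketch of pruning a stale remote-tracking branch, using a local bare repo
# in place of the Azure DevOps remote.
set -e

git init --bare remote.git
git clone "$PWD/remote.git" local
cd local
git checkout -b master
git config user.email "dev@example.com" && git config user.name "Dev"
echo x > f.txt && git add . && git commit -m "init"
git push -u origin master
git push origin master:old-topic        # a branch now exists on the remote

# Simulate someone else deleting the branch on the remote side:
git --git-dir=../remote.git branch -D old-topic

git branch -r                           # origin/old-topic may still linger here
git remote prune origin                 # drop the stale tracking branch
git branch -r                           # only origin/master remains
```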

How to remove local branches from the local repo when they are not needed 

  1. Get the local versions of branches in the local repo: 
    git branch -vv (this will give you all local branches) 

  2. Delete the local branch: 
    git branch -d <branch-name> 
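As a quick, self-contained illustration of the two commands above (the repo and branch names are invented for the sketch):

```shell
#!/bin/sh
# Sketch of listing and safely deleting a merged local branch.
set -e

git init demo
cd demo
git checkout -b master
git config user.email "dev@example.com" && git config user.name "Dev"
echo a > f.txt && git add . && git commit -m "init"
git branch topic          # a local branch, fully merged (same commit)
git branch -vv            # lists all local branches with tracking info
git branch -d topic       # -d only deletes branches that are already merged
git branch -vv            # topic is gone
```

Note that `git branch -d` refuses to delete an unmerged branch; `-D` forces the deletion.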


SharePoint Projects Intake Process

posted May 2, 2019, 10:56 AM by Ashar Khan

One of the major tasks I had to attend to when I started my new role as SharePoint Architect at my current client was to establish DevOps for SharePoint on-premises and SharePoint Online. Starting with Azure DevOps, I quickly recognized that the problem was bigger than that. There was no intake process for new projects, whether configuration, actual development, or just client-side scripts. It was ad hoc: the BAs would meet with the clients, the analysis would be done quickly, and everyone would dive into solutioning without checking the scope or release planning, and without actually creating the SDLC artifacts that are important for tracking progress as well as documenting what we are doing.

So the scope of DevOps dramatically changed. At this point, I decided to utilize Microsoft Teams to manage the workload, from the inception of an idea through to completion. I engaged with my colleague and we created a process which starts from a dev/config opportunity and guides you step by step through the configuration of a Team site to manage requirements, release planning, sprint progress, and DevOps for the final product.

Microsoft Teams Configuration

The process below defines all the steps needed, starting from requirements registered in a Tracker system which we use to track all requests. For all practical purposes, this can be an email or a project registry list.

  1. A work item/request comes in via WIP/Tracker 

  2. If a channel is needed in Tier1, one can be created (request/provision) 

  • Some WIP/Tracker items belong in the existing Fav channels  
    (e.g. CU Updates would go in the System Administration channel) 


  3. Non-project related work efforts will be coordinated in a channel in ‘Tier2 – Solutioning’ 


  • In ‘Tier2 – Solutioning’, a new channel will be provisioned in the relevant ‘SharePoint Project MSTeam’. Each department will have a dedicated ‘SharePoint Projects - MSTeam’ 

  • If a project warrants provisioning its own ‘SharePoint Projects - MSTeam’ we can do so ad hoc 


  • All of the ‘SharePoint Projects - MSTeams’ will be linked and presented via the ‘SharePoint Projects’ hub shared navigation 


  • We’ll use one Planner within that channel as the Storyboard to include: 

  a) ‘Todo’ bucket: 

  • Placeholder for the scheduling of a ‘Solution Review’ meeting (we’ll make it the first task by default) 

  • Requirements (requestor, functionality, etc.) 

  • Use cases 

  • User stories 

  b) ‘UAT Cases’ bucket: 

  • UAT case definitions 


  4. Prior to project kickoff (i.e. the start of the first sprint), we will conduct a one-time ‘Solution Review’ with the Solutioner. 

  • The ‘Solution Review’ consists of: 

  a) Solution designs 

  b) Work estimates 

  c) Defined sprints (in a Planner) and sprint durations 

  • The end deliverable of the Solution Review is a defined ‘Release Plan’ (i.e. a Planner) 

  • After the Release Plan has been defined, we will go back to the business and review for affirmation 

  • AzureDevOps requirements will be identified within the Release Plan and tracked across all the sprints 
       Note… we’ll use a separate and reusable ‘Planner’ for the sprints 


  • A wiki will be used for documenting the project 
       Note… store files in the files tab and link to them in the wiki 


  • Before adding any backlogged items to the Release Plan, we will consult with the business (e.g. during the next sprint retrospective/planning) to determine disruption and how to accommodate it without unending sprint creep 


  5. Escalation into ‘Tier3 – AzureDevOps’ can occur at any point in the project lifecycle and overlap with sprint(s) defined in the Release Plan 
       Note... At the highest level AzureDevOps is organized by department and then sub-categorized by project. 


  a) An AzureDevOps project is created 

  b) Devs will be added as participating members on the AzureDevOps project 

  c) A separate AzureDevOps-specific Release Plan will be worked on in AzureDevOps 

Snapshot of Team configuration:

Release Plan:

Snapshot of Release Plan


This is a running activity until the project deadline is met. This is the view where you copy tasks for the sprint you plan to start from the release plan into the To Do bucket, so you can start working on them, creating tasks and moving the stories along as they progress until they are done.

This view is always kept current to show the current sprint. If there are any items left to finish from the last sprint, they will show up here in either the In Progress or the QA bucket.

Caching problem with IE

posted Apr 5, 2017, 8:31 AM by Unknown user

My customer had been complaining over and over about data not being saved in the AngularJS app. I tried troubleshooting in Chrome and everything appeared to be working perfectly fine, until I started browsing my Angular app in Internet Explorer. I debugged through the JavaScript and, since I am using Angular $resource, there is a certain abstraction that I have to live with. The request was submitted correctly to the server and the update was performed successfully in the database. The response resource was collected in the success promise, but the screen only updated when I had the IE developer toolbar open. At that point I started googling and found it is an issue with the IE cache.

The following code in the Angular config block fixed my cache problem:

app.config(function ($httpProvider) {
    $httpProvider.defaults.cache = false;
    if (!$httpProvider.defaults.headers.get) {
        $httpProvider.defaults.headers.get = {};
    }
    // disable IE ajax request caching
    $httpProvider.defaults.headers.get['If-Modified-Since'] = '0';
});
The downside of this, of course, is no caching: even if the data is not rapidly changing on the site, it is always fetching data from the database. I wonder if there is a means to provide the configuration for specific areas of the site pages. I believe that can be done by writing a custom interceptor which essentially appends a unique value to the request URL, so that IE wouldn't recognize the URL and would always pull from the server.

Managed Metadata under the hood

posted May 25, 2012, 6:53 PM by Unknown user   [ updated May 25, 2012, 6:53 PM ]

Metadata repository at a logical and physical level 
Physical Level:
Managed Metadata Service Application (MMS): when managed metadata is enabled in our SPS 2010 Central Administration services, a managed metadata service and connection are created automatically. The service identifies the database to be used as the term store, and the connection provides access to the service, so that it can be consumed by our site collections. 
Logical Implementation:
Managed Metadata Terms (MMT) is a hierarchical collection of centrally managed terms that we can define and then use in our SPS 2010 farm for items such as pages, lists, and libraries. When we create new managed terms, these are stored in the database that is specified in the MMS. When we publish a managed metadata service, a URL to the service is created; these URLs are then used by our site collections to consume the services. 
Managed Metadata Connections (MMC): to consume managed metadata, a web application such as our Authoring environments in all of our farms must have a connection to an MMS. A web application can have connections to multiple services, and the services can be local to the web application or remote on another farm, as long as the farms can talk to each other. When a managed metadata service is first created, a connection to the service is created automatically in the same web application as the service. 
A Term
A term is a word or phrase that can be associated with an item in SharePoint Server 2010. 
Global MMT vs. Local MMT 
  • Managed via: Central Administration (Global); the local site collection (Local) 
  • Managed by: users with rights to the MMS on the server, via Central Administration (Global); site collection administrators and site owners (Local) 
  • Available to: all site collections in a web application associated with a given Metadata Service proxy (Global); the site collection where they were created (Local) 
The development lifecycle of a metadata repository 
OOTB, SPS 2010 offers a centralised UI so that Term Sets can be easily and logically created, edited, deleted, and managed.
Term Sets can also be imported using a spreadsheet, and they can be programmatically created, edited, and deleted, although it is a bit more difficult and error prone to go this way. 
IMPORTANT: We cannot use our regular deployment lifecycle to deploy new metadata using Visual Studio solutions. 
As I have explained in the first question, we can consume a MMT as long as the following prerequisites are met:  
  • Must have a valid URL of the service
  • If this will be a cross-farm connection, the farm on which the service runs and the farm on which the connection runs must have a trust relationship.
  • The service must have granted permission to the application pool account of the Web application in which the connection is created.
Given the nature of our specific secure farm architecture, which uses SQL authentication between our Authoring and Publishing farms, I can foresee issues such as content migration from Authoring to Publishing breaking, search not being able to properly index, and more.
The idiosyncrasies of a SharePoint 2010 metadata repository
I have listed some of the known issues below:
  • No SharePoint Workspace support (If a list or library is offline and contains required taxonomy fields, it will be inaccessible).
  • No support for bulk edit (There may be third party tools to do that)
  • No built-in support based on synonyms
  • Limited support in list views, leading to data filtering issues
  • Cannot be used in calculated columns
  • Ampersand issues
Ampersands are stored as full-width ampersands within the MMS database, so matching terms by their text in PowerShell and custom solutions (e.g. exporting terms to CSV) and writing CAML queries against them becomes a problem.
  • PowerShell cmdlets do not contain the same features available in the Taxonomy API
  • The Get-SPTaxonomySession cmdlet always retrieves a cached version of the TermStore 

Document & Records Management in SharePoint 2007 and 2010

posted Jul 12, 2011, 5:41 AM by Unknown user   [ updated Jul 12, 2011, 6:09 AM ]

I was asked about the difference between the document management and records management features in SharePoint 2007 and 2010, so I decided to write a blurb about it. Enterprises think about enhancing their records management capabilities and often get flummoxed choosing between off-the-shelf products like TRIM and a customized EDRMS solution built on the SharePoint platform. This is not about recommending one or the other, but about pointing out the capabilities built into SharePoint.
The Document Management feature of SharePoint Server for the enterprise provides the capability of creating workspaces and repositories to securely store documents along with their meta-data. Depending on the enterprise's document storage needs and its information architecture, a solution that complements the enterprise infrastructure must be designed with the following key components:

Content Types, Team sites, Document libraries, Workflows & Document workspaces

The state-of-the-art SharePoint enterprise search capability provides the means to crawl and index the content within the documents, along with the meta-data, so that it is easily searchable by users.

SharePoint document libraries offer a highly customizable framework for storing documents and can be configured to manage document version history, content approval, and workflows.

Records Management feature of SharePoint extends documents management capability and sets the focus on identifying information stored in document libraries and workspaces as records. SharePoint enables organizations looking to manage and archive their enterprise records through a document management system (eDRMS) by allowing them to store information in Records Center sites. These Record Centers are set to execute business rules that adhere to the records management policies in the organization by maintaining the electronic filing process, labelling, history, audit trails, routing and disposition of these documents. The routing & disposition workflows provide a framework to create custom retention policies.

Records management has come a long way since the initial launch of SharePoint in 2003, which had very limited functionality in terms of identifying records and applying retention policies to them. It was only with MOSS 2007 that the Records Center was introduced, with routing & disposition workflows and built-in audit functionality. SharePoint keeps track of all the events and changes that occur within the system, which enables managers and administrators to report on the activities performed by a business user in terms of how they interact with the system. However, designing the eDRMS system involved customization of SharePoint objects to provide implicit records management capabilities for the end users.

Today, enterprises need a fast, user-friendly, secure & reliable records management system which adheres to the organizational policies for record keeping and auditing. To name a few, DCAA, ISO, and CMMI are some of the standards being used as baselines by these enterprises, and SharePoint 2010 helps comply with them by introducing the following capabilities, which will help these enterprises on a massive scale to achieve their goals.

The document & records management capabilities are greatly enhanced in the latest version of SharePoint (SP2010) which offers many useful features OOTB including:

Corporate taxonomy & term stores: enables enterprises to manage groups of hierarchical corporate terms

Meta-data publishing / content hub: provides a means to manage global document templates and content types that can be shared across departments & functional areas. This helps implement business classifications and filing policies for record keeping

Content organizer: provides the ability to manage routing/retention/disposition workflows across document libraries and workspaces per business classification

Document stores: renders conceptual boundaries around documents across business areas

Meta-data navigation: simplifies navigation by enabling users to browse information based on business classification, location, and custom meta-data.

The above capabilities provide an extremely efficient way to manage records and enable enterprises to design & implement a highly effective and customized eDRMS.



Workflow Task Manager Activity for SharePoint

posted Sep 30, 2010, 11:46 PM by Unknown user   [ updated Sep 30, 2010, 11:57 PM ]

Some time ago I came across the issue of handling multiple tasks in parallel and managing them independently. It turned out that WF provides a friendly Replicator activity. So I decided to use it to develop a workflow that, upon kick-off, reads items from other lists for assignments, business areas, and notification lists, and spawns multiple tasks dynamically in parallel. The workflow is set to complete when all the tasks are actioned by their assignees, without waiting for one task to finish before the next task is created (parallel tasks).
How it works
Thankfully, WF comes with a looping activity to run a specific child activity multiple times, called the Replicator activity. It has its own challenges and limitations to work within; for instance, you can only add one activity as a child of a Replicator activity.

The first and foremost thing one has to get their head around is what happens behind the scenes when the Replicator is set to run in parallel mode. Not least, it is important to understand the crux of workflow execution, especially in the SharePoint world. There is heaps of documentation on MSDN, so I won't go into details, but I will mention a few things I had to face when using the Replicator.
  • One activity per replicator
  • Custom activity is a must if you want to run it in parallel.
  • Bubble up all dependency properties and events from your custom activity.
For further details and the download, hop on over to the CodePlex project here.

Welcome to my site

posted Dec 10, 2009, 1:01 AM by Unknown user   [ updated Dec 10, 2009, 1:06 AM ]

Welcome to my new site. Here you will find information about me, the type of work I am involved in, my areas of expertise, etc.
