Channel: SharePoint Escalation Services Team Blog

Partial Results feature for Search farms in SP2013


With index capacity now increasing to 250 million items spanning 64 index components in a search farm, many customers are asking for a partial results feature. Essentially, customers want the ability to mitigate a “full search outage” in a large-scale SharePoint search farm when some of the search components are still working fine. This feature allows end users to receive a partial result set even if not all replicas have responded to a query. It also significantly reduces the probability of a search outage when customers deploy a large number of partitions (we allow up to 25 partitions). Finally, it allows customers to offer a guaranteed minimum SLA to end users of mission-critical search portals in SharePoint. Below are some specifics.

WHAT HAS CHANGED?

We introduced a new search setting at the SSA level (AllowPartialResults) as one central switch to turn the partial results feature on or off (off is the default) for the whole farm. After enabling this feature, if one of the search replicas is degraded, end users will still receive results from the available search components. All relevancy and refinement results are returned from those available/responding index partitions (cells). Every returned result carries complete summary info for consistent UI rendering. When you enable this feature, a new metadata property (PartialResults = true) is added to the ResultTableCollection, but only when partial results are actually returned by the backend. The CSOM and REST APIs pick up this new property automatically, since the Server OM has been changed.

HOW TO ENABLE FEATURE?

This feature can be enabled through the steps below:

# Make sure you are on the November 2014 CU (or later).

# Then run the following:

$ssa = Get-SPEnterpriseSearchServiceApplication
$ssa.AllowPartialResults = $true
$ssa.Update()

Please note that we do not support enabling this feature at the site collection or site level - once turned on at the SSA, it affects all search sites.
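As a rough sketch of how a client could detect this condition (the URL below is an example, and the exact shape of the response properties should be verified in your environment), the PartialResults flag can be read from the REST search API response:

```powershell
# Sketch: query the SharePoint REST search API and look for the
# PartialResults flag among the query response properties.
# http://intranet is a placeholder; run under an account that can search.
$uri  = "http://intranet/_api/search/query?querytext='sharepoint'"
$resp = Invoke-RestMethod -Uri $uri -UseDefaultCredentials `
        -Headers @{ Accept = "application/json;odata=verbose" }

# Response-level properties come back as Key/Value pairs
$partial = $resp.d.query.Properties.results |
           Where-Object { $_.Key -eq "PartialResults" }

if ($partial -and $partial.Value) {
    Write-Warning "Partial results: one or more index replicas did not respond."
}
```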

 

POST BY : Srinivas Dutta


Requirements for sending email to SharePoint libraries, including X-Headers


There have been some questions around E-Mail requirements for sending email to a SharePoint list or library, and specifically around the need for the x-sender and x-receiver header fields.

This is not intended to be a comprehensive list of all requirements, but rather an overview of the way we use x-headers and other unique or notable requirements for sending mail to SharePoint lists and libraries.  More information about requirements not specifically cited here can be gleaned from the referenced RFC 821 and RFC 822 standards.

We process messages sent to SharePoint lists via email in three phases.

1. First is the SharePoint Email Engine, which consists of the functionality associated with the SharePoint timer job that fetches email messages from the Windows SMTP service via the drop folder and then converts each message into a stream for delivery to the appropriate list in the appropriate SPWeb. This functionality exists in private classes, so there is not a great deal of available information on the internals of this step.

2. The second phase is handled in the SPEmailMessage class; I have linked to the public MSDN information about this step. This phase consists of parsing the message, extracting the header, body, and attachment components, and converting these into the metadata and attachments to be added to the target list/library.

3. A third phase handles the details of mapping the appropriate metadata and attachments to the library-type specific fields.

X-Headers are typically used in processing SMTP mail from the drop folder. While phases #2 and #3 do not require the X-Headers, phase #1 does require them and uses them for routing the message.

Since the first phase is not publicly documented on MSDN I have attempted to sum up the requirements for this phase below, with additional, relevant information from phases 2 and 3.

1. Required fields include:

==============================

Sender Header = "x-sender: "

Receiver Header = "x-receiver: "

Mail file Pattern = "*.eml"

Message ID = "message-id"
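For illustration, a minimal drop-folder message file matching the `*.eml` pattern and satisfying these requirements could look like the sketch below (all addresses and the message ID are hypothetical):

```
x-sender: someuser@contoso.com
x-receiver: doclibrary@sharepoint.contoso.com
From: someuser@contoso.com
To: doclibrary@sharepoint.contoso.com
Subject: Quarterly report
Message-ID: <20150101120000.12345@contoso.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

Please file this report in the document library.
```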

If there are no recipients listed in the “x-receiver:” header field, no sender listed in “x-sender:”, or the x-sender header field is not present, we will fail to process the message and return an error with this verbiage in the ULS logs:

-----------------------------------------------------------------------------------------------------------------------------------------

A critical error occurred while processing the incoming e-mail file <email file name> The error was:

Bad senders or recipients

---------------------------------------------------------------------------------------------------------

Additionally, if the message header contains "X-AlertServerType" with a value of "STS", or "X-mailer" with a value of "Microsoft SharePoint Foundation 2010", the message is considered to have originated from SharePoint and will not be delivered to a SharePoint list.

The behavior seen in this first phase is similar to the Exchange replay folder behavior noted here:

https://technet.microsoft.com/en-us/library/bb124230.aspx

The X-Header fields, including x-sender and x-receiver, used by SharePoint are described in more detail here:

https://msdn.microsoft.com/en-us/library/ms526966(v=EXCHG.10).aspx

excerpt:

Using the SMTP Message Envelope Fields

A summary of the use of x-sender and x-receiver from this article is included below:

===========================================================================

The recipientlist field contains the e-mail address of each intended recipient. If the message was submitted over the network through the SMTP protocol, each recipient corresponds to a RCPT TO: protocol command. If the message was submitted through the local SMTP service pickup directory, each recipient corresponds to an x-receiver header at the top of the message file. You can alter this list when executing; for example, you can expand a custom distribution list by substituting the alias address with each member of the list.

The senderemailaddress field contains the address of the message's sender. If the message was submitted over the network using SMTP, this address corresponds to the transmitted MAIL FROM SMTP command. If the message was submitted using the SMTP service pickup directory, this address corresponds to the x-sender header at the top of the message file.

Note

The recipientlist and senderemailaddress envelope fields and the various message header fields such as (urn:schemas:mailheader:) To, From, Cc, and Bcc are different attributes of the message. The SMTP does not use the mail header fields to route the message; it routes the message based upon the RCPT TO and MAIL FROM protocol commands if a message is submitted using the SMTP, or the x-receiver and x-sender headers if a message is submitted to the local SMTP pickup directory. The envelope fields are not transmitted or stored with the message and exist only for the message's lifetime in SMTP transport.

===========================================================================

In addition to x-sender and x-receiver, we make use of and populate the messagestatus and arrivaltime x-header fields when picking up the message from the drop folder and delivering it to the appropriate list.  These fields help to manage delivery failures, aborts, retries, etc.

 

2. In the second phase we follow RFC 821 for attributes like “MAIL FROM” (EnvelopeSender); if MAIL FROM is blank or not present, we fall back to the RFC 822 “FROM” header.

The same behavior applies to the RFC 821 “RCPT TO” and RFC 822 “TO” header fields, among others.

In this phase - SPEmailMessage - we follow RFC 822 when parsing the other message header, body, and attachment components.

The MIME parser used is a specific version of the Exchange Edge server role MIME parser, modified to suit SharePoint's coding design.

Of note, a 4K header size limit is imposed by our implementation of the MIME parser.

 

3. Third phase (SP Email Handler) and additional information:

====================

Attachments - Invalid filename characters = "\\/:*?\"<>|#{}%~&"

SharePoint List types supported for email

NOTE: SharePoint has coded email handlers for the following SPListTemplateType types.

· Announcements

· Events

· DocumentLibrary

· PictureLibrary

· XMLForm

· DiscussionBoard

· Posts

 

POST BY: Mike Demster [MSFT]

SharePoint 2013: Active Directory Import and known behaviors


I recently worked with a customer on an Active Directory import problem where we found that disabled users are not deleted from the UPA automatically; I have blogged about that here. Digging a little deeper, I discovered other behaviors of the Active Directory import method. I have tried to document a few of them in this blog post, and will add more as observed.

Here is what you need to know before choosing the Active Directory import option to sync users in SharePoint 2013. You may expect the Active Directory import method to behave the same way as FIM, except that it cannot export to AD. That is not the only difference; the others are:

  • Disabled user accounts in Active Directory are not automatically deleted or marked for deletion (bDeleted = 1) in the User Profile Service Application.

 

  • It imports non-user objects as well, such as computer accounts.

 

  • If you have an OU that contains both computer and user objects, both are imported into the UPA. This is not the case with FIM-based synchronization.

 

  • You cannot select only a few users under an OU; the import process brings in all users in an OU, and the whole OU has to be selected.

 

  • If the user object has a value for the “LastKnownParent” property and that value points to an OU which is not being imported, then that profile will be ignored during the import process.

Consider this scenario:

# UPA has the following OUs imported

    Root

            OU1 (selected in the import connection)

                      User1 (has “LastKnownParent” pointing to OU2)

            OU2 (not selected in the AD connection)

                      User1 will be ignored.

The “LastKnownParent” attribute is generally populated when a user is deleted from AD and moved to the recycle bin; it helps track where the user was previously located. However, if you delete a user from OU2 and restore it to OU1, LastKnownParent will still have a value pointing to OU2.

  • AD Import does not delete disabled accounts from the UPA automatically, so the following command is often suggested:

         o Set-SPProfileServiceApplication –PurgeNonImportedObjects $true

         o You need to understand the impact of this command before running it.

              # How are profiles created in the UPA?

       Through the import process

        By manually creating profiles, or through the object model

       When a user hits the My Site host, a profile is created automatically.

When you use the PurgeNonImportedObjects command, it deletes all objects that did NOT come in through import, which includes profiles that were:

  • Not imported due to a change in OU selection, a disabled account, filtering, etc.
  • Manually created
  • Automatically created by browsing to the My Site host
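As a hedged sketch of running the purge (cmdlet and switch names as documented for SharePoint 2013; verify in a test farm first, since the purge permanently deletes the profile categories listed above):

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Locate the User Profile Service Application
$upa = Get-SPServiceApplication | Where-Object { $_.TypeName -like "User Profile*" }

# Deletes every profile that did NOT come in through the import:
# profiles excluded by OU selection or filters, disabled accounts,
# manually created profiles, and profiles auto-created via the My Site host.
Set-SPProfileServiceApplication -Identity $upa -PurgeNonImportedObjects:$true
```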

So, when your AD is not properly structured (for example, an OU contains both user and other objects), or your import process needs complex filtering, it is recommended to use FIM-based synchronization.

 

POST BY: Satheesh Palanisamy [MSFT]

SharePoint 2010 - Unable to create Sites or upload images after installing kb3013126 (MS14-085)


 

 

Symptoms

We have seen a number of SharePoint issues caused by a recent Security Update KB 3013126 (MS14-085), including the following:

1. Unable to create sites (even with out-of-box templates, except the "Blank Site" template).
2. Image file uploads to picture libraries fail.
3. Content deployment hangs.
4. AspMenu does not function correctly (the menu drop-down does not work).
5. W3WP crashes with a stack overflow (it can hang too).

Cause

The issue appears to be caused by an update to windowscodecs.dll made by security update KB 3013126.

Resolution

The current workaround is to either uninstall KB 3013126, update IE on the server to IE9 or IE10 (the issue occurs only if the server has IE8), or install KB971512.

It seems that some prerequisites are missing from the security patch reference, and these are covered by installing IE9/IE10 or installing KB971512.
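A quick way to check whether the update is present on a server, and to remove it if you choose that workaround (standard Windows cmdlets and tools; test on a non-production server first):

```powershell
# Returns the hotfix entry if KB 3013126 is installed, nothing otherwise
Get-HotFix -Id KB3013126 -ErrorAction SilentlyContinue

# To uninstall it (one of the workarounds above):
wusa.exe /uninstall /kb:3013126 /norestart
```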

More Information

There are now two security patches that update WindowsCodecs.dll: http://support.microsoft.com/kb/3013126 and http://support.microsoft.com/kb/3029944 (the latter replaces KB 3013126).

This update applies to the Windows Codecs component, which is part of the Windows OS; SharePoint is one of the affected applications.

 

POST BY : ANOOP PRASAD [MSFT]

Configuring Search Verticals (Videos, Conversations, People) for Hybrid Search Experiences in SharePoint 2013 and SharePoint Online - Part 6


When a user browses to an Enterprise Search Center they see four out-of-box search verticals. A search vertical filters search results so that only a certain subset of all relevant results is displayed. SharePoint Server 2013 provides four preconfigured search verticals: Everything, People, Conversations, and Videos. In my previous post we spoke about how to include OneDrive for Business as a search vertical. Below are the four default search verticals in an Enterprise Search Center, along with OneDrive, which can be configured as a search vertical.

image

 

As with the OneDrive experience, to include additional search verticals we need to set up appropriate result sources and query rules to match the desired search results.

When we configure hybrid search our primary focus has always been the ‘Everything’ search vertical. This post focuses on configuring hybrid search results for the rest of the verticals:

People as a Hybrid Search Vertical.

Conversations as a Hybrid Search Vertical.

Videos as a Hybrid Search Vertical.

We already spoke about how you can configure OneDrive as a search vertical. These verticals can be configured to return hybrid search results both in SharePoint on-premises and in SharePoint Online, provided both the outbound and inbound hybrid infrastructure has been configured.

Configure People as a Hybrid Search Vertical

To configure the People vertical to return hybrid search results we primarily need to modify the result source and query rules. This configuration change can happen at the global level (i.e., Central Administration), the site collection level, or the web level. For this post we will concentrate on the global configuration. Browse to the Search service application and, under Search Administration, click Result Sources.

image

Click New Result Source and fill in the following information.

General Information: Give the result source a name, for example SPO People.

Protocol: Choose Remote SharePoint.

Remote Service URL: Type in the root site collection URL for your SharePoint on-premises site collection.

Type: Choose People Search Results.

Query Transformation: Leave the defaults.

Credential Information: If you are configuring your on-premises environment (i.e., you want outbound hybrid), leave this as Default Authentication. If you are configuring in SharePoint Online, choose SSO ID, and in the Reverse Proxy Certificate Secure Store ID box type the name of the secure store you created in the inbound hybrid configuration specified here. Click Save to save the source.

If you edit the result source, the settings should look like the print screen below.

image
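If you prefer scripting, a result source like this can also be created with PowerShell. This is a hedged sketch for the on-premises side: the name and remote URL are examples, the provider lookup follows the pattern used in published hybrid scripts, and options that only exist in the UI (such as the People Search Results type) should be verified on the created source afterwards.

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$ssa   = Get-SPEnterpriseSearchServiceApplication
$owner = Get-SPEnterpriseSearchOwner -Level Ssa

# Look up the remote SharePoint provider by its display name
$fedManager = New-Object Microsoft.Office.Server.Search.Administration.Query.FederationManager($ssa)
$provider   = $fedManager.ListProviders()["Remote SharePoint Provider"]

# "SPO People" and the tenant URL are placeholders for this sketch
New-SPEnterpriseSearchResultSource -SearchApplication $ssa -Owner $owner `
    -Name "SPO People" `
    -ProviderId $provider.Id `
    -RemoteUrl "https://contoso.sharepoint.com"
```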

 

Once the result source is created, you can always test it, and smile if you see Success identical to the print screen below. This means you have successfully configured the result source to query for people information from SharePoint Online.

Note: When testing a result source in this way, you must be logged on with an account that has a user profile on the remote farm and permission to access the URL defined as the remote URL.

image

 

The next step is to configure the query rule for the SPO People result source that we just created. Browse to the Search service application from Central Administration. Under the Query and Results section click Query Rules. In the “From what context do you want to configure rules” drop-down, choose the result source you created above, e.g., SPO People.

image

Click New Query Rule and fill in the following information.

General Information: Give the query rule a name, for example SPO People QR.

Expand Context and click Add Source. From the Add Source drop-down choose the result source “Local People Results”.

Remove any query condition, since for this post we will just focus on out-of-box results.

Under Actions, in the Result Block section, click Add.

Leave the Block Title defaults.

Within Query, under Search This Source, choose the result source we created above and named SPO People.

Expand Settings and select the radio button that states “This block is always shown above core results”. This ensures the results are definitely shown; if you choose ranked, the results may not show up because others may rank higher than the ones from SPO.

Choose the defaults for Routing. Click OK.

Under Publishing, ensure the Is Active check box is selected unless you explicitly want a future publishing date. Click Save. If you edit the query rule we just created, you should see the settings match the picture below.

image

 

This post assumes that you have already crawled the on-premises UPA to get results from the SharePoint on-premises environment. This completes the setup of hybrid results in the Enterprise Search Center People vertical. Browse to the search center, click the People vertical, and type the name of a person or fire a wildcard query (* is a valid wildcard but is an expensive query). You should see results from both SharePoint on-premises and SharePoint Online.

Configure Conversations as a Hybrid Search Vertical

To configure the Conversations vertical to return hybrid search results we need to create a result source specifying a query transformation, as well as a query rule. Browse to the Search service application and, under Search Administration, click Result Sources.

image

Click New Result Source and fill in the following information.

General Information: Give the result source a name, for example SPO Conversations.

Protocol: Choose Remote SharePoint.

Remote Service URL: Type in the root site collection URL for your SharePoint on-premises site collection.

Type: Choose SharePoint Search Results.

Query Transformation: The query transformation plays a key role here. You can copy the text below as-is (note: if you are typing it yourself, you need to add a space between {searchTerms?} and the opening parenthesis before ContentTypeId). Alternatively, you can edit the Conversations result source, which is available in the on-premises list of result sources, and copy the same details.

{searchTerms?} (ContentTypeId:0x01FD4FB0210AB50249908EAA47E6BD3CFE8B* OR ContentTypeId:0x01FD59A0DF25F1E14AB882D2C87D4874CF84* OR ContentTypeId:0x012002* OR ContentTypeId:0x0107* OR WebTemplate=COMMUNITY)

Credential Information: If you are configuring your on-premises environment (i.e., you want outbound hybrid), leave this as Default Authentication. If you are configuring in SharePoint Online, choose SSO ID, and in the Reverse Proxy Certificate Secure Store ID box type the name of the secure store you created in the inbound hybrid configuration specified here. Click Save to save the source.

If you edit the result source, the settings should look like the print screen below.

image

Once the result source is created, you can test it as we did for the People result source. If you see Success, you have successfully configured the result source to query for conversation information from SharePoint Online.

The next step is to configure the query rule for the SPO Conversations result source that we just created. Browse to the Search service application from Central Administration. Under the Query and Results section click Query Rules. In the “From what context do you want to configure rules” drop-down, choose the result source you created above, e.g., SPO Conversations.

image

 

Click New Query Rule and fill in the following information.

General Information: Give the query rule a name, for example SPO Conversations QR.

Expand Context and click Add Source. From the Add Source drop-down choose the result source “Conversations (System)”.

Remove any query condition, since for this post we will just focus on out-of-box results.

Under Actions, in the Result Block section, click Add.

Leave the Block Title defaults.

Within Query, under Search This Source, choose the result source we created above and named SPO Conversations.

Expand Settings and select the radio button that states “This block is always shown above core results”. This ensures the results are definitely shown; if you choose ranked, the results may not show up because others may rank higher than the ones from SPO.

Choose the defaults for Routing. Click OK.

Under Publishing, ensure the Is Active check box is selected unless you explicitly want a future publishing date. Click Save. If you edit the query rule we just created, you should see the settings match the picture below.

image

This completes the setup of hybrid results in the Enterprise Search Center Conversations vertical. Browse to the search center, click the Conversations vertical, and type a search term or fire a wildcard query (* is a valid wildcard but is an expensive query). You should see results from both SharePoint on-premises and SharePoint Online.

 

image

 

Configure Videos as a Hybrid Search Vertical

To configure the Videos vertical to return hybrid search results we need to create a result source specifying a query transformation, as well as a query rule. Browse to the Search service application and, under Search Administration, click Result Sources.

image

Click New Result Source and fill in the following information.

General Information: Give the result source a name, for example SPO Videos.

Protocol: Choose Remote SharePoint.

Remote Service URL: Type in the root site collection URL for your SharePoint on-premises site collection.

Type: Choose SharePoint Search Results.

Query Transformation: The query transformation plays a key role here. You can copy the text below as-is (note: if you are typing it yourself, you need to add a space between {searchTerms?} and the token that follows it). Alternatively, you can edit the Videos result source, which is available in the on-premises list of result sources, and copy the same details.

{searchTerms?} {?path:{Scope}} {?owstaxIdMetadataAllTagsInfo:{Tag}} (ContentTypeId:0x0120D520A808* OR (SecondaryFileExtension=wmv OR SecondaryFileExtension=avi OR SecondaryFileExtension=mpg OR SecondaryFileExtension=asf OR SecondaryFileExtension=mp4 OR SecondaryFileExtension=ogg OR SecondaryFileExtension=ogv OR SecondaryFileExtension=webm))

Credential Information: If you are configuring your on-premises environment (i.e., you want outbound hybrid), leave this as Default Authentication. If you are configuring in SharePoint Online, choose SSO ID, and in the Reverse Proxy Certificate Secure Store ID box type the name of the secure store you created in the inbound hybrid configuration specified here. Click Save to save the source.

If you edit the result source, the settings should look like the print screen below.

 

image

Once the result source is created, you can test it as we did for the People result source. If you see Success, you have successfully configured the result source to query for video information from SharePoint Online.

The next step is to configure the query rule for the SPO Videos result source that we just created. Browse to the Search service application from Central Administration. Under the Query and Results section click Query Rules. In the “From what context do you want to configure rules” drop-down, choose the result source you created above, e.g., SPO Videos.

image

Click New Query Rule and fill in the following information.

General Information: Give the query rule a name, for example SPO Videos QR.

Expand Context and click Add Source. From the Add Source drop-down choose the result source “Local Video Results (System)”.

Remove any query condition, since for this post we will just focus on out-of-box results.

Under Actions, in the Result Block section, click Add.

Leave the Block Title defaults.

Within Query, under Search This Source, choose the result source we created above and named SPO Videos.

Expand Settings and select the radio button that states “This block is always shown above core results”. This ensures the results are definitely shown; if you choose ranked, the results may not show up because others may rank higher than the ones from SPO.

Choose the defaults for Routing. Click OK.

Under Publishing, ensure the Is Active check box is selected unless you explicitly want a future publishing date. Click Save. If you edit the query rule we just created, you should see the settings match the picture below.

 

image

This completes the setup of hybrid results in the Enterprise Search Center Videos vertical. Browse to the search center, click the Videos vertical, and type a search term or fire a wildcard query (* is a valid wildcard but is an expensive query). You should see results from both SharePoint on-premises and SharePoint Online.

 

 

image

Note: besides testing the result sources, you can navigate to Query Rules, edit the query rule for each of the ones we created above, and fire a test query. You should see corresponding results from SharePoint Online and SharePoint on-premises for the corresponding vertical.

Our next post will talk about key troubleshooting steps that you can take if one of these configuration examples fails to return results.

POST BY : Manas Biswas [MSFT] & Neil Hodgkinson [MSFT]

Configuring Microsoft Web Application Proxy Server (WAP) for an Inbound Hybrid Topology with Office 365 and Microsoft SharePoint Server 2013 - Part 7


This post is a companion to the earlier published Part 2 of this hybrid configuration and deployment series. In this post we will replace the Threat Management Gateway (TMG) reverse proxy used in the previous post with Microsoft Web Application Proxy (WAP). Windows Server 2012 R2 includes WAP as a component of its Remote Access feature set, and it is recommended as a reverse proxy solution for SharePoint 2013 Server and for inbound hybrid scenarios.

At this point it is assumed that the organization has already completed a number of steps as defined in Part 1 of this hybrid series, i.e.:

1.> Subscribe to an Office 365 Tenant

2.> Configure Server to Server Trust with Windows Azure ACS

3.> Completed the Directory Synchronization steps

Additionally, the organization should have deployed a SharePoint 2013 Server farm on premises and created a web application configured for Secure Sockets Layer (SSL) traffic, using an SSL certificate signed by a public certificate authority. Information on configuring a SharePoint 2013 Server web application with SSL can be found here: https://technet.microsoft.com/en-us/Library/ee806885.aspx

Before we can configure SharePoint Online to display results from SharePoint Server 2013 on premises, we must install and configure the Microsoft Web Application Proxy server to accept incoming requests from SharePoint Online.

Once WAP is ready we can configure SharePoint Online to display results from SharePoint 2013 Server on premises. To do this we need to:

1> Configure a Secure Store Target Application

2> Create a Remote SharePoint Result Source

3> Create a Query Rule

In this post we will be focusing on the WAP Configuration and Deployment steps.

Microsoft Web Application Proxy Infrastructure Deployment

Microsoft Web Application Proxy (WAP) is available as a new Remote Access role of Windows Server 2012 R2. Before we can install WAP we need to deploy an Active Directory Federation Services (ADFS) 3.0 service, which will serve as the configuration store and could optionally be used to support end-user federated authentication if desired by the organization.

Deploy Active Directory Federation Services 3.0

Active Directory Federation Services 3.0, like WAP, is a role that can be deployed on a Windows Server 2012 R2 server. ADFS requires a public SSL certificate to secure the Federation Services public endpoint. In this case we have a certificate obtained from DigiCert for the adfs.nellymo.com common name and have already loaded it into the local Computer Account Personal certificate store on the server where we will install ADFS. We will need this certificate later when we configure the ADFS service.

Install ADFS

Carry out the following steps to deploy ADFS for use with WAP

1> Add Windows Server 2012 R2 to the on premises domain.
Complete this task using your standard Corporate Policy.

2> Use server manager Add Roles and Features Wizard to install Active Directory Federation Services on the choose server roles selection page.

image

3> Accept defaults on the Add Features page.

image

4> On the ADFS install summary page review the provided information, specifically the “things to note” section. The server must be domain joined (which it is), and Web Application Proxy cannot be installed on the same machine as the ADFS role when used as a federation proxy. We want to use WAP as a server publishing proxy but will still follow this guidance and install WAP on a separate server.

image

5> Click Next

 

image

6> Select Auto Restart and acknowledge the popup then click Install

image

 

7> Wait until installation is complete and machine reboots.

 

image

8> After installation there are some post install setup tasks to complete. Reload Server Manager and review the items presented on the notification flag.

image

9> Select “Configure the federation services on this server”

 

image

The default setting is to create the first federation server in a federation server farm. For the purposes of this installation this is going to be the first and only federation server, but for production deployments we recommend a minimum of two servers in the federation farm to support high availability.

10> Click next

 

image

11> Specify alternate credentials if necessary. In this case NELLYMO\spadmin is a domain admin and therefore has permission to perform federation services configuration. Click Next

 

image

12> Here we choose the SSL certificate uploaded earlier and provide a federation service display name. At this point you should also have the federation service name configured in your on-premises DNS to provide name resolution to this federation server. If you intend to use this ADFS service as a federation service endpoint for federated authentication in Office 365, you should also ensure this endpoint is publicly accessible and has a public DNS registration. Click Next.

 

image

13> Provide a domain user account credential or managed service account to act as the ADFS Service Account. Click Next.

 

image

14> Provide a location to store the ADFS configuration database. When creating the ADFS database we can either use the local Windows Internal Database, which is the default setting, or choose a SQL Server instance in the same domain. In this case I will use a SQL Server instance on a separate machine.
Note: After clicking Next you may be prompted to overwrite an existing ADFS database instance if one exists. Act accordingly and be careful not to overwrite any current production configuration while deploying or testing this feature.
Click Next.

 

image

15> Review the configuration selections. At this point we can also export the configuration as a Windows PowerShell script for future use. Click Next

If we take a look at the Windows PowerShell script that was generated by this process it becomes clear how we can adapt this to install multiple servers in the same farm or deploy additional farms.

#Windows PowerShell script for AD FS Deployment

#

Import-Module ADFS

# Get the credential used for the federation service account

$serviceAccountCredential = Get-Credential -Message "Enter the credential for the Federation Service Account."

Install-AdfsFarm `

-CertificateThumbprint:"182602ECD225A7C66555465B889C7A5AE1099EDA" `

-FederationServiceDisplayName:"Nellymo Corporation" `

-FederationServiceName:"adfs.nellymo.com" `

-ServiceAccountCredential:$serviceAccountCredential `

-SQLConnectionString:"Data Source=hybrid-dcsql01;Initial Catalog=ADFSConfiguration;Integrated Security=True;Min Pool Size=20"

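For example, to join an additional server to the same SQL-backed farm, the generated script can be adapted to use Add-AdfsFarmNode instead of Install-AdfsFarm. This is a sketch only, reusing the illustrative thumbprint and connection string from this walkthrough:

```powershell
Import-Module ADFS

# Credential for the same federation service account used on the first node
$serviceAccountCredential = Get-Credential -Message "Enter the credential for the Federation Service Account."

# Join this server to the existing farm by pointing at the shared configuration database
Add-AdfsFarmNode `
    -CertificateThumbprint:"182602ECD225A7C66555465B889C7A5AE1099EDA" `
    -ServiceAccountCredential:$serviceAccountCredential `
    -SQLConnectionString:"Data Source=hybrid-dcsql01;Initial Catalog=ADFSConfiguration;Integrated Security=True;Min Pool Size=20"
```

The SSL certificate must already be imported on the joining node, just as it was on the first server.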

 

image

16> Pre-requisite checks are carried out automatically and must pass before installation can begin.
Click Configure if the pre-requisite checks are successful. If they are not, review the Results pane for warnings or failures, remediate accordingly, and rerun the prerequisite checks.

image

17> Installation proceeds, and you can smile when you see “This server was successfully configured”.
Click Close to exit the ADFS Configuration Wizard.

image

18> You can test ADFS is installed correctly by browsing to the ADFS sign in page here - https://adfs.nellymo.com/adfs/ls/idpinitiatedsignon.aspx
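You can also check the service from PowerShell by requesting the federation metadata document, which ADFS publishes at a well-known path (the host name below is this walkthrough's example):

```powershell
# A 200 response with XML content indicates the federation service is responding
$resp = Invoke-WebRequest -Uri "https://adfs.nellymo.com/FederationMetadata/2007-06/FederationMetadata.xml" -UseBasicParsing
$resp.StatusCode
```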

You have now successfully installed and configured ADFS for use in this scenario

Installing WAP on Windows Server 2012 R2

As this server will function as the gateway to your corporate domain, recommended security best practice is to ensure the server is not domain joined and is located in a secure DMZ.

19> To begin installing WAP launch Server Manager and choose Add Roles and Features

image

20> In Server Roles, choose Remote Access and click Next

image

21> On the Feature selection panel accept the defaults, click Next

image

 

22> Review the details about the Remote Access Role and click Next

image

23> On the Roles Services selection panel choose Web Application Proxy and a popup will appear. Click Add Features to accept installation of the Remote Access administration tools.
Click Next after the PopUp disappears

 

image

 

24> Review and confirm the installation choices
Optionally select “Restart destination server if required” and accept the dialog popup.
Click Install to complete the installation.

image

25> Installation proceeds until complete, at which point you can close the installation wizard.

26> After installation there are some post install setup tasks to complete. Reload Server Manager and review the items presented on the notification flag.

 

image

27> Select “Open the Web Application Proxy Wizard”

image

28> Review the configuration notes and Click Next

 

image

29> Enter the ADFS service name as entered during the ADFS installation phase and supply a username and password for an account that has admin rights to the ADFS Server.
Click next

image

30> Choose the ADFS public cert. Once again this cert must be loaded into the local computer account personal certificate store on the WAP Server in the same manner as it was with the ADFS setup.
Click next

image

31> As with the ADFS setup we get a Windows PowerShell script that can be used to deploy WAP on other machines
Click Configure

image

32> Configuration proceeds until complete.
Click close and the remote access console loads.

 

Configuring SharePoint publishing rules for Web Application Proxy

The Windows Web Application proxy supports publishing SSL enabled web applications using either pass-through, ADFS authentication or client certificate authentication. For our scenario we are going to look at configuring WAP to support client certificate authentication as this is the preferred mechanism to validate that an incoming search or BCS request is coming from a trusted location.

Important: The client pre-authentication certificate can be the same certificate used to establish the SSL channel; the only requirement is that Office 365 can validate the certificate publisher chain. The advantage of using a separate, unique certificate is that the client pre-authentication certificate is never available publicly. This prevents unscrupulous individuals from submitting inbound hybrid requests while presenting a publicly accessible certificate for client pre-authentication. For testing purposes you can use either approach, but Microsoft strongly recommends using a unique, single-purpose certificate for client authentication.

To publish a SharePoint web application to support inbound hybrid configurations we need to use a certificate for two processes.

· A certificate to establish the SSL channel. This is a public certificate that matches the external URL of the published SharePoint web application. We will be using https://internet.nellymo.com in the scenarios to follow.

· A certificate to upload into the secure store in Office 365. This is presented during client pre-authentication when challenged by the WAP server.

In our scenario we will be using two different certificates as follows.

The first certificate is one acquired from a well-known public certificate authority and is used to secure our externally published SharePoint web application, https://internet.nellymo.com.

The second certificate is also acquired from a well-known public certificate authority and is used for client pre-authentication. It has the common name userauth.nellymo.com.

Each of the certificates has been stored in the local computer account personal certificate store on the WAP server. In this example we will also have a local copy of the .pfx version of each certificate on the local filesystem for use in the PowerShell configuration script where we publish the web application.
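Before running the publishing script, it is worth confirming that both certificates are present in the WAP server's computer store. A quick check (the subject names are this example's):

```powershell
# List certificates in the local computer Personal store and confirm both are present
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -match "internet\.nellymo\.com|userauth\.nellymo\.com" } |
    Format-Table Subject, Thumbprint, NotAfter -AutoSize
```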

image

 

To publish a SharePoint web application using client certificate pre-authentication you have to use PowerShell. WAP can only be configured to publish pass-through and ADFS secured web applications using the GUI.

33> Open PowerShell ISE as Administrator and use the following script to implement the reverse proxy publishing rule

 

As mentioned earlier, the two certificates we are using to configure the WAP publishing rule have been copied to the local file system. The file paths below should be updated to reflect the location on your own server where you have stored these certificates.

You should also modify the script to reflect the internal and external URLs within your own environment. By default WAP expects the BackendServerUrl and ExternalUrl to be identical. If you have configured different URLs for these parameters then you will need to disable URL translation for the publishing rule and add Alternate Access Mappings to the SharePoint web application. This is a more complex configuration and will be the subject of a future blog post.
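As a sketch only (the full split-URL walkthrough is out of scope here), disabling URL translation on an existing rule would look something like the following; the rule name is the one created later in this post, and parameter availability should be verified with Get-Help on your Windows Server version:

```powershell
# Hypothetical example: turn off URL rewriting in request/response headers for a rule
Get-WebApplicationProxyApplication -Name "Hybrid Inbound Rule" |
    Set-WebApplicationProxyApplication -DisableTranslateUrlInRequestHeaders:$true -DisableTranslateUrlInResponseHeaders:$true
```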

#Get the thumbprint of the external URL certificate

$externalcert = Get-pfxCertificate -FilePath "c:\cert\internet_nellymo_com.pfx"

$externalcert.Thumbprint

#Get the thumbprint of the client pre-authentication certificate

$clientcert = Get-pfxCertificate -FilePath "c:\cert\userauth_nellymo_com.pfx"

$clientcert.Thumbprint

Add-WebApplicationProxyApplication `

-Name "Hybrid Inbound Rule" -BackendServerUrl "https://internet.nellymo.com" `

-ExternalUrl "https://internet.nellymo.com" `

-ExternalCertificateThumbprint $externalcert.Thumbprint `

-ExternalPreauthentication "ClientCertificate" `

-ClientCertificatePreauthenticationThumbprint $clientcert.Thumbprint

34> Execute the script and refresh the remote access console to review the published rule

image

This is the rule as published by WAP to the internet.

35> Validate the rule by navigating from an external web browser to the externally published URL with Fiddler2 loaded to inspect the web traffic. Fiddler2 will respond with a dialog indicating a certificate challenge was received.

 

image

 

So now we have the web application published successfully and we need to configure Office 365 to send the certificate to the proxy when challenged for inbound hybrid requests.

Configure the Office 365 Secure Store

In order to configure the secure store you need to be a Global administrator on the Office 365 tenancy and have access to the certificate used for the client pre-authentication on the WAP server.

36> To configure the secure store, login to the tenant admin portal and open the SharePoint administration site.

image

37> Navigate to the Secure Store admin page and select New to create a new Secure Store Target Application.

 

image

38> Enter a name for the new application and set the credential fields to match the table below

Field Name

Field Type

Certificate

Certificate

Certificate Password

Certificate Password

Complete the other fields to provide administrative control over this application, and also specify the group(s) of users who are mapped to this identity. The mapped users will be able to consume this secure store application from SharePoint Online. In this case we are stating that all users of the tenancy except external users can call this application.
Click OK to create and save the target Secure Store application.

image

39> Highlight the new application in the secure store admin page and select Set Credentials
Browse to the same pfx certificate file used for the client pre-authentication when configuring the WAP publishing rule and supply the password used to encrypt the file.
Click OK

That concludes the ADFS, WAP and Secure Store configuration required to support inbound hybrid with WAP as a reverse proxy. The final stage is to configure the result source and query rules as you have seen in the earlier part 2 of this blog series on configuring inbound hybrid.

 

 

POST BY Manas Biswas [MSFT] & Neil Hodgkinson [MSFT]

Troubleshooting Certificate Validation errors for Inbound Hybrid Search with Office 365 and Microsoft SharePoint Server 2013 –Part 8


To understand how to configure Hybrid Search topologies, see Part 1 and Part 2 of this series. Things are not always straightforward though, and sometimes errors occur or mistakes are made when following this guidance.

The information in this post is generated while troubleshooting a test hybrid environment with SharePoint 2013 and Office 365. Some of the guidance below involves retrieving ULS logs for which you would have to engage Microsoft Support.

In this article we will describe how to troubleshoot certificate issues that occur in a hybrid search deployment of Microsoft SharePoint Online in Office 365 and on-premises Microsoft SharePoint Server 2013. For example, when you try to submit a search query from SharePoint Online to the on-premises SharePoint 2013 server, the search query fails with the following error.

System.Net.WebException: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. ---> System.Security.Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure

Examination of the Office 365 SharePoint Online ULS logs shows the following entries.

NodeRunner.exe (0x1194) 0x1B34 SharePoint Foundation Topology 8311 Critical An operation failed because the following certificate has validation errors: Subject Name: CN=spweb.mbiswas.com, OU= Online , O=Hybrid Corp, L=Bangalore, S=Blr, C=IN Issuer Name: CN=Test CA, DC=hybrid, DC=test, DC=dc, DC=mbcloud, DC=com Thumbprint: <certificate thumbprint> Errors: PartialChain: A certificate chain could not be built to a trusted root authority. RevocationStatusUnknown: The revocation function was unable to check revocation for the certificate. OfflineRevocation: The revocation function was unable to check revocation because the revocation server was offline.

NodeRunner.exe (0x1194) 0x0C94 SharePoint Foundation Topology 8311 Critical An operation failed because the following certificate has validation errors: Subject Name: CN=spweb.mbiswas.com, OU= Online , O=Hybrid Corp, L=Bangalore, S=Blr, C=IN Issuer Name: CN=Test CA, DC=hybrid, DC=test, DC=dc, DC=mbcloud, DC=com Thumbprint: <certificate thumbprint> Errors: PartialChain: A certificate chain could not be built to a trusted root authority. RevocationStatusUnknown: The revocation function was unable to check revocation for the certificate. OfflineRevocation: The revocation function was unable to check revocation because the revocation server was offline. .

NodeRunner.exe (0x1194) 0x1B34 SharePoint Server Search Query ajhxa High RemoteSharepointProducerSystem.Net.WebException: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. ---> System.Security.Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure. at System.Net.Security.SslState.StartSendAuthResetSignal(ProtocolToken message, AsyncProtocolRequest asyncRequest, Exception exception)

During the configuration of inbound hybrid search you will have set up the reverse proxy publishing rule for client certificate pre-authentication and uploaded a copy of the SAN or wildcard certificate to the Secure Store in SharePoint Online, as well as to the local computer account personal store on the Web Application Proxy server. The above error is thrown due to an invalid or unreachable AIA (Authority Information Access) location specified on this certificate. An AIA location is a URL indicating the publication source of the certificate, used for verification purposes.

You can inspect the certificate yourself by downloading a public copy of it from the published SharePoint on-premises URL. If you verify the certificate using the certutil tool, you should see an error similar to the example below.

<certificate thumbprint>

Missing Issuer: CN=Test CA, DC=hybrid, DC=test, DC=dc, DC=mbcloud, DC=com

  Issuer: CN=Test CA, DC=hybrid, DC=test, DC=dc, DC=mbcloud, DC=com

  NotBefore: 05/04/2014 00:45

  NotAfter: 05/04/2015 00:45

  Subject: CN=spweb.mbiswas.com, OU= Online , O=Hybrid Corp, L=Bangalore, S=Blr, C=IN

  Serial: 64813e6d99970000066a

  SubjectAltName: DNS Name=spweb.mbclould.com, DNS Name= spwebmail.mbclould.com, DNS Name= sharepoint.mbclould.com, DNS Name=App.mbcloud.com 

A certificate chain could not be built to a trusted root authority. 0x800b010a (-2146762486 CERT_E_CHAINING)

Incomplete certificate chain

Cannot find certificate:

   CN=Test CA, DC=hybrid, DC=test, DC=dc, DC=mbcloud, DC=com is an End Entity certificate

ERROR: Verifying leaf certificate revocation status returned The revocation function was unable to check revocation because the revocation server was offline. 0x80092013 (-2146885613 CRYPT_E_REVOCATIO

N_OFFLINE)

CertUtil: The revocation function was unable to check revocation because the revocation server was offline.
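The verification output above can be reproduced with certutil against a local copy of the downloaded certificate (the file name below is illustrative). The -urlfetch option forces certutil to retrieve the AIA and CRL URLs, which is exactly the step that fails here:

```powershell
# Verify the chain and fetch AIA/CRL locations from the certificate's embedded URLs
certutil -verify -urlfetch .\spweb.cer
```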

If your analysis shows a similar output, you need to validate why the AIA location(s) specified on the certificate are not accessible. Work with the certificate issuer to determine why these AIA locations are unavailable publicly. Alternatively, obtaining a certificate with a valid, publicly accessible AIA location should fix the issue.

The current list of Microsoft Windows supported root certificate authorities can be found here.

This article is the first in a series of Hybrid troubleshooting posts. Watch this space for more troubleshooting tips and tricks around Hybrid.

POST By : Manas Biswas [MSFT] & Neil Hodgkinson [MSFT]

InfoPath & Postback settings in SharePoint


 

Some InfoPath form controls, actions, and features require the browser to communicate with the server during a form session. This exchange of data during the session is called a postback, and usually occurs when a form feature has to send data to the server for processing. Unnecessary postbacks impose an additional load on both the browser and the server. To protect the server, a threshold is set for the maximum number of postbacks per session. This limits the number of postbacks that can be executed during a single session when a user is filling out a form, and prevents malicious users from trying to bring down the server.

For SharePoint on-premises, the following user session settings are configurable by using Central Administration, as described in http://technet.microsoft.com/en-us/library/cc262263.aspx#authenticate

- Thresholds

[Number of postbacks per session] --Default is 75

[Number of actions per postback] --Default is 200

- User Sessions

[Active sessions should be terminated after] --Default is 1,440 minutes

[Maximum size of user session data] --Default is 4,096 KB
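On-premises, these thresholds can also be inspected from PowerShell via the InfoPath Forms Service object. This is a sketch; the property names below are assumptions to verify on your own farm (Get-SPInfoPathFormsService | Format-List * shows the real set):

```powershell
# Inspect the current threshold values (property names assumed; list everything with Format-List *)
$svc = Get-SPInfoPathFormsService
$svc | Select-Object MaxPostbacksPerSession, MaxUserActionsPerPostback, ActiveSessionTimeout, MaxSizeOfUserFormSessionState
```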

In the case of SharePoint Online (Office 365), these settings are not available in the tenant or SharePoint admin site. Because it is a shared farm architecture, the default limits are enforced as hard limits to avoid performance degradation and prevent denial-of-service attacks.

If you're seeing this with forms deployed to SharePoint Online, you need to redesign your forms to resolve the issue. Here is a blog series from Microsoft's InfoPath team that is worth a read and provides guidance:

Designing browser-enabled forms for performance in InfoPath Forms Services (Part 1)

Designing browser-enabled forms for performance in InfoPath Forms Services (Part 2)

Designing browser-enabled forms for performance in InfoPath Forms Services (Part 3)

Designing browser-enabled forms for performance in InfoPath Forms Services (Part 4)

Additional References :

https://technet.microsoft.com/en-us/library/ee513119(v=office.14).aspx

 

Post by : Rajan Kapoor [MSFT]


SharePoint 2013 : Cookie dropped from Distributed Cache with Event ID: air4a


You experience logon failures with SharePoint 2013 such as “access denied” errors or redirection to a logon page. When this issue occurs, the following logs appear in the SharePoint ULS logs:

01/09/2015 01:16:15.94 w3wp.exe (0x1D58) 0x27EC SharePoint Foundation DistributedCache air4a Monitorable Token Cache: Failed to get token from distributed cache for '0e.t|DOM|4019c581-8990-4729-b9a3-779f6bbf3ee3'.(This is expected during the process warm up or if data cache Initialization is getting done by some other thread).

01/09/2015 12:12:17.39 w3wp.exe (0x19C8) 0x1FFC SharePoint Foundation DistributedCache air37 Monitorable Token Cache: Reverting to local cache to Add the token for '0).w|s-1-5-21-2076828631-2543552492-1570323127-500'. 0328de9c-0cba-10fd-9061-7a15457058e0

CAUSE

Users intermittently lose their token from the Distributed Cache caused by the Application Pool being recycled. This behavior is by design. In this scenario, the code logic follows this path:

1. Find the cached token

2. Verify that the SPDistributedSecurityTokenCacheInitializer is initialized

3. If SPDistributedSecurityTokenCacheInitializer is not initialized, look for the token in the local cache

4. If the token is not found in local cache, generate logging

This issue occurs when the Application Pool is recycled on a SharePoint Front End Server (WFE) and the site has not been accessed after the recycle occurred. The Distributed Cache will not initialize for that WFE server until the site is accessed from the WFE, so this issue will be intermittent on a per-server basis. As soon as the Application Pool is warmed up, Distributed Cache will be initialized until the next recycle.

WORKAROUND

Configure the site by using warm up scripts to force Distributed Cache initialization after an Application Pool is automatically recycled.

MORE INFORMATION

Example of Warm up PowerShell script:

Get-SPWebApplication | ForEach-Object { Invoke-WebRequest $_.url -UseDefaultCredentials -UseBasicParsing }

This script should be scheduled as a task on each WFE Server in the farm that runs after the scheduled application pool recycles.
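One way to schedule it (a sketch; the script path and trigger time are assumptions for illustration) is to save the one-liner as a .ps1 file and register a task on each WFE:

```powershell
# Run the warm-up script shortly after the default IIS application pool recycle window
$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-NoProfile -File C:\Scripts\WarmUp.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At "2:05AM"
Register-ScheduledTask -TaskName "SharePoint WarmUp" -Action $action -Trigger $trigger -RunLevel Highest
```

Run the task under an account that has access to the web applications, since the warm-up request uses default credentials.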

 

POST BY :MIKE LEE [MSFT]

SharePoint 2010 /2013: “An exception of type Microsoft.SharePoint.Administration.SPUpdatedConcurrencyException was thrown” while installing an update


You try to install cumulative updates for SharePoint 2010 and 2013, and you encounter this error:

An exception of type Microsoft.SharePoint.Administration.SPUpdatedConcurrencyException was thrown.  Additional exception information: An update conflict has occurred, and you must re-try this action. The object SPUpgradeSession Name=Upgrade-DATE-TIME-RAND was updated by DOMAIN\user, in the PSCONFIG (PID) process, on machine MACHINENAME.  View the tracing log for more information about the conflict.

CAUSE

This exception can be caused by a race condition (at least two threads access shared data and try to change it concurrently) when Psconfig tries to update sites, tables, or databases. This scenario is a rare occurrence during a SharePoint upgrade.

WORKAROUND

Use the following PowerShell script to open Psconfig with single processor affinity to force the utility to run one update at a time:

SP 2010

$cmd="start "+""""" /affinity 1 "+"""C:\Program Files\Common Files\microsoft shared\Web Server Extensions\14\BIN\Psconfig.exe""" + " -cmd" +  " upgrade" + " -inplace" + " b2b" + " -wait"
cmd.exe /c $cmd

SP 2013

$cmd="start "+""""" /affinity 1 "+"""C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\BIN\Psconfig.exe""" + " -cmd" +  " upgrade" + " -inplace" + " b2b" + " -wait"
cmd.exe /c $cmd

To verify that this process opened by using single core affinity only, follow these steps:

1. Open the Task Manager

2. Select the Details tab

3. Locate the running process for Psconfig

4. Right-click the process and select Set Affinity: you should see only CPU 0 selected

5. Let the Psconfig process complete
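The same check can be done from PowerShell: ProcessorAffinity is a bitmask, and a value of 1 means only CPU 0 is enabled (the process name assumes Psconfig.exe is still running):

```powershell
# 1 = binary 0001, i.e. the process is pinned to CPU 0 only
(Get-Process psconfig).ProcessorAffinity
```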

MORE INFORMATION

The workaround script configures single core affinity without using the -force parameter. This -force parameter ignores errors or skips steps that are required for SharePoint upgrades to be installed. In this scenario, two processes try to use the same resources while they are running on different server cores. Configuring for single core affinity prevents these processes from using mutual resources.

 

POST BY : Paul Haskew [MSFT]

The all new Cloud Search service application coming to SharePoint 2013 and SharePoint 2016


 


We would like to make an announcement regarding the new hybrid capability being delivered as an update by the end of the year for SharePoint Server 2013 and natively in SharePoint Server 2016. The new feature is the Cloud Search service application and it enables the crawling of on-premises content sources to feed the Office 365 search index. This creates a truly unified search experience for users in a hybrid environment.

The new Cloud Search service application allows organizations to leverage the unified index approach and reduce the size of their on-premises search deployment. Take a look at the session below to learn about considerations when configuring hybrid search as well as deployment guidelines. Keep an eye on this space for our post on best practices and scaling guidelines

Below is a quick summary of the features that are part of the pre-release version that is being validated in a limited TAP program today.

1. The Cloud Search service application will support crawling of on-premises content to build one unified index in the cloud.

2. All existing content sources that are supported in SharePoint Server 2013 will be supported for crawling by the Cloud Search service application.

3. The unified index will enable authenticated users to search from SharePoint Online and return results that include items from both on-premises content and online content without the need to implement query federation.

4. Authenticated users will also be able to query from SharePoint Server 2010, 2013, or 2016 on-premises for content from the SharePoint Online unified search index. This can be done by publishing the SharePoint Cloud Search service application and consuming it from a SharePoint Server 2010 farm. In addition, you configure hybrid query federation from the Cloud Search service application to SharePoint Online. As a result, both site search and the enterprise search center work, and users can see results from the unified search index in SharePoint Online.

5. Query federation continues to be supported in SharePoint Server 2013 for both the inbound and outbound model

Post by: Manas Biswas [MSFT], Neil Hodgkinson [MSFT] & Kathrine Hammervold [MSFT]

ERR_config_Db while starting the UPA Synchronization service in SharePoint 2010/2013


After configuring the User Profile Service Application, you need to start the Synchronization service if you intend to use the ForeFront Identity Manager component in SharePoint 2010/2013, which provides the ability to import and export data from a variety of directory sources and to augment profile data from additional sources such as BCS. The Sync service startup has many known issues, any of which might cause the startup to fail.

 

This post covers the causes of Event ID 9i1w "ERR_Config_Db" as seen in the ULS logs after trying to start the Sync service.

[Date and Time] OWSTIMER.EXE (0x1DA0) 0x1DA4 SharePoint Portal Server User Profiles 9i1w Medium ILM Configuration

ILM Configuration: Configuring database
ILM Configuration: Error 'ERR_CONFIG_DB'.
UserProfileApplication.SynchronizeMIIS: Failed to configure MIIS post database, will attempt during next rerun. Exception: System.Configuration.ConfigurationErrorsException: ERR_CONFIG_DB
at Microsoft.Office.Server.UserProfiles.Synchronization.ILMPostSetupConfiguration.ValidateConfigurationResult(UInt32 result)
at Microsoft.Office.Server.UserProfiles.Synchronization.ILMPostSetupConfiguration.ConfigureMiisStage2()
at Microsoft.Office.Server.Administration.UserProfileApplication.SetupSynchronizationService(ProfileSynchronizationServiceInstance profileSyncInstance).

Note: For the UPA, "config db" means the Synchronization DB, as all the configuration details are stored there.

 

Possible Cause 1: This is due to insufficient privileges for the SharePoint Farm Account on the Sync DB (not the Config DB!). You need to add the farm account to the Sync DB users as DBO with a default schema of DBO and then start UPS again.

Possible Cause 2: Incorrect prerequisites installed, i.e. SQL Server Native Client 2012/2014 is installed while the requirement is SQL Server 2008/2008 SP1. Details here: http://technet.microsoft.com/en-us/library/cc262485.aspx#reqOtherCap

Possible Cause 3: If you use SQL Server 2014 to host the Sync DB, or moved your databases to SQL Server 2014 (SharePoint 2013), there is a known issue and you need to update to the April 2014 CU or later for SharePoint 2013: https://support.microsoft.com/en-us/kb/2760265

Possible Cause 4: Connectivity issues to the SQL Server, such as a wrong alias. Run Start --> Run --> cliconfg to check and configure aliases.

Possible Cause 5: Using the FQDN of the SQL Server while creating the SharePoint Server farm.
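For the connectivity cause, the aliases configured through cliconfg can also be inspected directly in the registry (the 64-bit client node is shown; a Wow6432Node equivalent exists for 32-bit clients):

```powershell
# SQL client aliases created by cliconfg live under this key; no output means no aliases are defined
Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo" -ErrorAction SilentlyContinue
```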

Post by: Rajan Kapoor [MSFT]

SharePoint 2013: SharePoint Designer Workflow in Suspended or Canceled State


 

Symptoms

Consider the following scenario:

You publish a SharePoint 2013 Designer Workflow associated with a SharePoint list. A user without edit permission to that SharePoint list then starts a workflow instance on the list. On the Workflow Information page, users see the status for the initiated workflow as suspended or canceled. The Workflow Information page displays an error message that resembles the following:

RequestorId: ac01a77b-619a-dbd2-0000-000000000000. Details: An unhandled exception occurred during the execution of the workflow instance. Exception details: System.ApplicationException: HTTP 401 {"Transfer-Encoding":["chunked"],"X-SharePointHealthScore":["0"],"SPClientServiceRequestDuration":["864"],"SPRequestGuid":["ac01a77b-619a-dbd2-aa49-a4d2580be234"],"request-id":["ac01a77b-619a-dbd2-aa49-a4d2580be234"],"X-FRAME-OPTIONS":["SAMEORIGIN"],"MicrosoftSharePointTeamServices":["15.0.0.4631"],"X-Content-Type-Options":["nosniff"],"X-MS-InvokeApp":["1; RequireReadOnly"],"Cache-Control":["max-age=0, private"],"Date":["Mon, 12 Jan 2015 20:03:16 GMT"],"Server":["Microsoft-IIS\/8.5"],"WWW-Authenticate":["NTLM"],"X-AspNet-Version":["4.0.30319"],"X-Powered-By":["ASP.NET"]} at Microsoft.Activities.Hosting.Runtime.Subroutine

SubroutineChild.Execute(CodeActivityContext context) at System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager) at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)

Cause

When the Automatically update the workflow status to the current stage name option is selected in SharePoint Designer, SharePoint must edit the list page to change the workflow status. A user needs edit permissions on the list to edit the workflow status. If a user does not have edit permissions on the list, the workflow enters a suspended state.

Resolution

Option 1

Clear the Automatically update the workflow status to the current stage name option in the Settings > General settings for this workflow SharePoint Designer panel.

 image

Option 2

Grant users edit permissions on the list.

Option 3

Add an App Step containing a Set workflow status to action at the beginning of each workflow stage. Follow these steps:

1. Configure your environment for SharePoint Apps. Visit Install and manage apps for SharePoint 2013 for more information.

2. Enable the App Step by going to Site Actions > Site Settings > Site Actions > Manage Site Features > Workflows can use app permissions.
image

3. Open SharePoint Designer and edit the affected workflow. At the beginning of each stage, insert an App Step.

image

image

 

4. In the App Step, add Set workflow status to Insert Stage Name here.

5. Clear the Automatically update the workflow status to the current stage name option in SharePoint Designer’s General settings for this workflow pane.

image

6. Save and publish the workflow.

The App Step will now use the permissions of the SharePoint Apps to change the workflow status instead of using the user’s permissions.

 

Note. With Option 3, the Modified By column no longer shows the user who started the workflow but instead the SharePoint App account that changed the workflow status. To work around this, add the Created By column to the list view to discover which user created the request.

image

Post By : Charmaine Christie-Primo [MSFT]

SharePoint 2013: Office documents Search crawl message indicating "This item was partially parsed."


 

Symptom

Consider the following scenario:

You start a Search Service Application crawl in the SharePoint Server 2013 environment. Crawled Office documents that contain embedded images report this crawl warning:

This item was partially parsed. The item has been truncated in the index because it exceeds the maximum size.

Cause

The default SharePoint Search document parser extracts the contents from an Office document when the crawler processes a file. If the parser encounters an embedded photo, the associated schema reference shows the presence of a picture which the crawler passes over. With embedded images such as bitmaps, the associated image is identified as a shape without an associated schema. This generates the crawl warning message, because the parser cannot determine how to process the item. The current feature set for native, built-in handlers for Office documents in SharePoint Server 2013 does not support embedded images.

Resolution

Note: The warning message indicates that the embedded items are not processed. However, the remaining text and properties are still crawled and are searchable.
With a minimum server update level of the July 2014 CU (15.0.4631.1001) for SharePoint Server 2013, you can use a third-party iFilter of your choice to parse bitmaps and override the built-in SharePoint 2013 handler.
The filter packs built natively into SharePoint 2013 do not have the full functionality of the filter packs from SharePoint 2010 and FAST. The Office 2010 Filter Pack referenced here can parse embedded images effectively and can be used to override the out-of-box handler for Office documents:
Microsoft Office 2010 Filter Packs

To enable iFilter support, use the cmdlet Set-SPEnterpriseSearchFileFormatState, which has the switch -UseIFilter for this purpose. The full command to switch to the iFilter follows.

Note: Use a similar command for any iFilter that you want to replace after you install the iFilter.

Step 1:

$ssa = Get-SPEnterpriseSearchServiceApplication

Get-SPEnterpriseSearchFileFormat -SearchApplication $ssa -Identity docx

Step 2:

Set-SPEnterpriseSearchFileFormatState -SearchApplication $ssa -Identity docx -UseIFilter $true -Enable $true

Step 3:

On each server that hosts the Content Processing component, the Search Host Controller service must be restarted to accept the changes. Use the following procedure:

net stop spsearchhostcontroller

net start spsearchhostcontroller

After you complete these steps, start a full crawl. The documents that were generating the warning before should now be displayed as crawled successfully by using the Office 2010 Filter pack for embedded images.
If at some point the natively built filter packs are updated to support crawling embedded images, disable the installed, overriding iFilters by using the previously mentioned PowerShell commands, setting each file type back with -UseIFilter $false.
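For example, a sketch of reverting the .docx handler to the built-in parser (repeat for each file type you overrode):

```powershell
$ssa = Get-SPEnterpriseSearchServiceApplication
# Stop using the installed iFilter for .docx; keep the format enabled for crawling
Set-SPEnterpriseSearchFileFormatState -SearchApplication $ssa -Identity docx -UseIFilter $false -Enable $true
```

As in Step 3, restart the Search Host Controller service on each server hosting a content processing component for the change to take effect.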

More Information

Implement a custom iFilter in SharePoint 2013

Post By :  Johanson Sandrasagra [MSFT]

SharePoint 2013 : Site Usage Analytics Report shows no values when crawling a non-default Zone URL for the Web-application


 

Usage analytics is a set of analyses that receive information about user actions, or usage events, such as clicks or viewed items, on the SharePoint site. Usage analytics combines this information with information about crawled content from the Search analyses, and processes the information. Information about recommendations and usage events is added to the search index. Statistics on the different usage events is added to the search index and sent to the Analytics reporting database.

I came across a situation in a customer's deployment where Site Usage Analytics reports did not work for one web application in the farm, while they worked fine for the others.

Here is an overview of how request usage data is processed:

1. Usage events are written into .usage files, which exist by default at C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\LOGS\RequestUsage.

2. The Microsoft SharePoint Foundation Usage Data Import timer job updates the WSS_Logging database and processes the data into the so-called Event Store, which has one folder for each day.

3. The Event Store folder exists on the servers that run the Search 2013 analytics processing component; in my setup it is located at:

"C:\Program Files\Microsoft Office Servers\15.0\Data\Office Server\Analytics_81c88675-044e-4c81-810e-9c8e001405cf\EventStore\20130705"

image

 

4. The Microsoft SharePoint Foundation Usage Data Processing timer job runs daily; it processes the Event Store data and populates the Search Reporting database from which the reports are pulled.
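When troubleshooting, you can exercise this pipeline on demand. A sketch that runs the usage import and processing timer jobs immediately (the -like filter on job names is an assumption; verify the exact job names in your farm first):

```powershell
# Find the usage import/processing timer jobs and run them now
Get-SPTimerJob | Where-Object { $_.Name -like "*usage*" } | ForEach-Object {
    Write-Host "Running $($_.Name)"
    $_.RunNow()
}
```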

If you look at the entries in the Event Store files, you will notice that all requests show up under the default zone URL, because that is how we store them there even if the end user's request went to another zone URL of the web application. This matches the guidance we have for crawling the default zone URL of the web application, as mentioned here:

Best practices for crawling in SharePoint Server 2013

So the reason this was not working was that we had the intranet zone URL in the search content source: the crawled items in the index did not match the entries in the Event Store, and hence the analytics processing produced no output for that raw dataset. One more reason to always crawl the default zone URL in the search content source.

Here is another blog post which explains other problems that can arise from crawling non-default zone URLs:

Problems Crawling the non-Default zone *Explained

POST By : Rajan Kapoor [MSFT]


SharePoint Check Permissions on the site permissions page works intermittently with SAML claims authentication


 

I came across an interesting situation where using Check Permissions to find a user's permissions on a SharePoint site would show "None", and then occasionally work, without any changes being made to permissions in SharePoint or to the user account. This started happening after the customer implemented ADFS for SAML authentication on this specific web application.

Here is the scenario for the setup

You set up ADFS SAML authentication for a SharePoint web application. ADFS is configured to use LDAP attributes as claims, with the following being used:

Identity claim: Email Address

Role claim: Token-Groups - Unqualified Names

Permissions were defined by adding end users to Active Directory groups, and those AD groups in turn were added to the built-in SharePoint groups.

Note: While adding an AD group to a SharePoint group, the AD group's role claim was selected.

Interestingly, one day you check effective permissions for a user and they show up, but the very next day they don't.

Here is what is happening in the background …

When we perform the check permissions operation, we call the SPSite constructor with the user token we have for the desired permission check. Certain information (such as group memberships) is passed through by the identity provider and exists on the SharePoint side only in the token.

This token will not contain the group memberships if the user is not in the UserInfo table (that is, has not yet logged in to the site)

or

if the ExternalToken stored by SharePoint is expired or incomplete (it does not contain the group memberships). The only way to rebuild the token is to send the user on a round trip to ADFS to re-authenticate.

So, interestingly, if the user has signed in recently enough, there will be an existing token that can be used to perform the check; otherwise, the check reports that there are no permissions.

The following observations come out of this explanation:

Scenario 1: New user. You create a new user in AD, add the user to the required AD group (which has been added to a SharePoint group), and then check the user's permissions.

Observation: Check Permissions fails until the user logs on to the site; once the user has logged in, it continues to work for that day.

Scenario 2: Modified group membership. A user is added to or removed from a group to grant or revoke permissions.

Observation: Actual permissions and access work as expected, but Check Permissions does not show the updated membership for about a day (24 hours from the last logon). It shows the updated permissions only after the user logs in the next day, because the token gets refreshed.

Scenario 3: No changes. The user shows the required permission on the site as seen by Check Permissions today, but trying again tomorrow may show "None" again.

Observation: If the user does not log in for a couple of days, Check Permissions will show "None" until the user logs back in.

Note: This issue does not occur with Windows classic, Windows claims, or forms-based authentication for the web application. The behavior is the same in SharePoint 2010 and 2013.

 

POST BY :Rajan Kapoor [MSFT]

SharePoint 2013: How to Rebalance Links across Crawl Stores ?


 

Issue

When you use Central Administration to view content sources for the Search Service Application, the content source history and content source pages take a very long time to load. Crawls might hang with WCF-related messages in the ULS logs, or the Rebalance crawl store partitions job might try to run.

A similar issue was reportedly resolved for SharePoint 2010 in the April 2011 CU; see this Knowledge Base article.

Symptoms:

As the content in the content source increases, the time it takes for the content sources page to load increases. This is a good litmus test.

We can also query the MSSCrawlURL table in the crawl store database for the number of links per content source and host ID.

The ULS logs will contain messages similar to the following:

 

04/16/2015 08:25:14.47 mssearch.exe (0x3CAC) 0x1678 SharePoint Server Search Crawler:Common ad9s8 Assert location: search\libs\utill\hangrecoverer.cxx(189) condition: !"Crawl hangs" StackTrace: at Microsoft.Office.Server.Native.dll: (sig=c3cf111a-d6fc-4ec5-ba1c-c9b8ebb47e41|2|microsoft.office.server.native.pdb, offset=1F074) at MSSrch.dll: (sig=bcc79b69-8dc0-4f7b-b42b-469030237946|2|mssrch.pdb, offset=22759) at MSSrch.dll: (offset=226FD) at MSSrch.dll: (offset=1DE54) at ntdll.dll: (sig=9d04eb0a-a387-494f-bd81-ed062072b99c|2|ntdll.pdb, offset=16CEB) at ntdll.dll: (offset=16405) at ntdll.dll: (offset=16BF2) at kernel32.dll: (sig=c4d1d906-5632-4196-99a8-b2f25b62381d|2|kernel32.pdb, offset=1652D) at ntdll.dll: (sig=9d04eb0a-a387-494f-bd81-ed062072b99c|2|ntdll.pdb, offset=2C541)

One will also see the rebalance job running against owstimer for the crawl partitions.

04/16/2015 09:07:12.28 OWSTIMER.EXE (0x390C) 0x2164 SharePoint Foundation Monitoring nasq Medium Entering monitored scope (Timer Job Rebalance crawl store partitions for bf01da64-8d00-441e-add5-79ab3741a479). Parent No 4a85179d-420b-e0df-4af4-51bc9809ff48

Solution

In SharePoint 2013, however, we need to rebalance the links across the crawl store databases from time to time. Simply adding a new crawl store database does not help the situation by itself. Crawl stores maintain an indicative imbalance threshold, and a method is available that returns a Boolean indicating whether or not the crawl stores are imbalanced.

 

$SSA = Get-SPEnterpriseSearchServiceApplication

$CrawlStorePartitionManager = new-Object Microsoft.Office.Server.Search.Administration.CrawlStorePartitionManager($SSA)

$CrawlStorePartitionManager.CrawlStoresAreUnbalanced()

True

The CrawlStorePartitionManager object has a method called CrawlStoresAreUnbalanced that gives us a Boolean indication of whether or not the crawl store links are full enough to warrant rebalancing at all.

$CrawlStorePartitionManager.CrawlStoresAreUnbalanced()

False

Note: 1 million is the default value for CrawlStoreImbalanceThreshold. If the stores are unbalanced, this call tells us immediately.

The SSA's property bag also has a property called CrawlStoreImbalanceThreshold, which can be read and modified using the GetProperty and SetProperty methods.

$SSA.GetProperty("CrawlStoreImbalanceThreshold")

$SSA.SetProperty("CrawlStoreImbalanceThreshold",10000)

Increasing the number of crawl stores for the various hosts also helps improve performance.

However, just increasing the number of crawl stores does not automatically balance the links in MSSCrawlURL across the hosts. Fortunately, SharePoint gives us a way to manage this as well: a timer job called "Rebalance crawl store partitions" performs it for us.

This job runs automatically when a threshold called CrawlPartitionSplitThreshold is reached. The default value at which rebalancing starts is 10 million items, but it can be triggered earlier based on our needs.

After we make the above change, we can verify the value against the registry key of the same name, CrawlPartitionSplitThreshold, under CatalogNames:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office Server\15.0\Search\Applications\<SSAGUID>-crawl-0\CatalogNames

image
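A sketch for reading that value from PowerShell (replace <SSAGUID> with your SSA's GUID; the registry value name is assumed to match the one shown above):

```powershell
# Read the split threshold for the crawl catalog of a given SSA
$key = "HKLM:\SOFTWARE\Microsoft\Office Server\15.0\Search\Applications\<SSAGUID>-crawl-0\CatalogNames"
(Get-ItemProperty -Path $key).CrawlPartitionSplitThreshold
```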

 

We can then call the BeginCrawlStoreRebalancing method on the CrawlStorePartitionManager object we created earlier.

 

$SSA = Get-SPEnterpriseSearchServiceApplication

$CrawlStorePartitionManager = new-Object Microsoft.Office.Server.Search.Administration.CrawlStorePartitionManager($SSA)

$SSA.GetProperty("CrawlPartitionSplitThreshold")

10000000

$SSA.SetProperty("CrawlPartitionSplitThreshold",50000)

$CrawlStorePartitionManager.BeginCrawlStoreRebalancing()

Note: The rebalance job does no work unless the threshold is crossed. Even if we force the job to run, the actual rebalancing happens only if the threshold is met, so setting the value of CrawlPartitionSplitThreshold is critical.

We can achieve the same effect by forcing the Rebalance crawl store partitions job to run with the following PowerShell command.

 

image

(Get-SPTimerJob | Where-Object { $_.Name -match "Rebalance" }).RunNow()

POST BY : Ramanathan Rajamani [MSFT]

SharePoint 2013: Crawl database grows because of crawl logs and causes poor crawl performance.


 

The crawl log contains information about the status of what was crawled. This log lets you verify whether crawled content was added to the index successfully, whether it was excluded because of a crawl rule, or whether indexing failed because of an error. Additional information about the crawled content is also logged, including the time of the last successful crawl, the content source (there can be more than one), the content access account used, and whether any crawl rules were applied.

Crawl information is stored in the following tables of the crawl store database:

• The MSSCrawlHostList table contains host names with their host IDs.

• The MSSCrawlHostsLog table stores the hosts of all the URLs processed in the crawls.

• The MSSCrawlUrlLog table keeps track of the history of errors encountered in the crawls.

These logs are helpful as primary troubleshooting tools for determining the cause of problems. They are the first indication of problems on sites, such as why the crawler is not accessing certain documents or certain sites; they also tell you whether an individual document has been crawled successfully.

Please Note: Crawl Logs and History get cleared if an index reset is done.
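The crawl log can also be queried programmatically through the CrawlLog administration class. A sketch (the parameter values are assumptions; GetCrawledUrls takes getCountOnly, maxRows, a URL filter, isLike, contentSourceId, errorLevel, errorId, and a start/end time):

```powershell
$ssa = Get-SPEnterpriseSearchServiceApplication
$crawlLog = New-Object Microsoft.Office.Server.Search.Administration.CrawlLog($ssa)
# Return a count of all logged URLs, across all content sources and error levels
$crawlLog.GetCrawledUrls($true, 0, "", $false, -1, -1, -1, [datetime]::MinValue, [datetime]::MaxValue)
```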

Symptom:

The search databases grow: crawl logs and history keep growing. If the content is large, the crawl logs may grow to several GB, leading to:

· Poor crawl page performance

· Crawls take longer time to complete.

· Crawl logs take longer time to update.

· Search DB growing in size for same or similar number of indexable items

· Crawled properties take longer time to convert to managed properties.

The SharePoint ULS logs contain messages like the following:

10/24/2013 13:36:17.32  mssdmn.exe (0x1048) 0x1E54 SharePoint Server Search PHSts  dvi9 High ****** COWSSite has already been initialized, URL http://contoso/customer/test, hr=80041201

10/24/2013 13:36:17.32  mssdmn.exe (0x1048) 0x1E54 SharePoint Server Search PHSts dv63 High CSTS3Accessor::InitURLType: Return error to caller, hr=80041201   

10/24/2013 13:36:17.32  mssdmn.exe (0x1048) 0x1E54 SharePoint Server Search PHSts dv3t High CSTS3Accessor::InitURLType fails, Url sts4://contoso/siteurl=customer/test/siteid={9aaac17f-88bc-45c1-8bbd-362bcaf6c0e4}/weburl=/webid={1cdedb2d-9524-4ba7-b73e-6bbe737857e0}/listid={5880f808-7af4-4974-b378-4ce2a3af9177}/folderurl=/itemid=47, hr=80041201 

10/24/2013 13:36:17.32  mssdmn.exe (0x1048) 0x1E54 SharePoint Server Search PHSts dvb1 High CSTS3Accessor::Init fails, Url sts4://contoso/siteurl=customer/test/siteid={9aaac17f-88bc-45c1-8bbd-362bcaf6c0e4}/weburl=/webid={1cdedb2d-9524-4ba7-b73e-6bbe737857e0}/listid={5880f808-7af4-4974-b378-4ce2a3af9177}/folderurl=/itemid=47, hr=80041201

Crawl Logs deletion:

The Search Service Application property which decides the retention of the crawl logs is “CrawlLogCleanUpIntervalInDays”. The default value for this property is 90 days.

It can be modified using the following Powershell commands

$ssa = Get-SPServiceApplication -Name "Search Service Application"

$ssa.CrawlLogCleanUpIntervalInDays = 30

$ssa.Update()

 

Once this is done, all you need to do is run the timer job responsible for cleaning the crawl logs: Crawl Log Cleanup for Search Application <<Search Service Application>>

You can manually run this job from Central Admin.

 image

 

We can also run this job through PowerShell:

$job = Get-SPTimerJob | where-object {$_.name -match "Crawl log"}

$job.runnow()

 

POST BY : DIVYA AGARWAL [MSFT]

SharePoint 2013 - PSCONFIG is mandatory after installing July 2015 CU for SharePoint Server 2013.


 

The July 2015 CU for SharePoint Server 2013 contains an important update with respect to the database schema of the analytics reporting database. The change was required to support URLs up to 4,000 characters long.

It is important to note that the crawler now expects the database schema to be updated and will fail to crawl if PSCONFIG (wizard or command line) has not been run to update the schema.

In other words: unlike other CU's, it is important to run PSCONFIG right after installing the July 2015 CU binaries.

Note: If you do not run PSCONFIG (Wizard or Command), crawling will no longer work.

Also note that if you want to install any CU later than the July 2015 CU on a farm that has not yet received the July 2015 CU, the same PSCONFIG (wizard or command line) step is required; otherwise search crawls will not work.
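For reference, a commonly used command line for this (a sketch; run it from an elevated SharePoint 2013 Management Shell on every server in the farm, and confirm the switches against your own upgrade guidance):

```powershell
PSConfig.exe -cmd upgrade -inplace b2b -wait -cmd applicationcontent -install -cmd installfeatures -cmd secureresources -cmd services -install
```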

This information has already been updated in the articles below.

https://support.microsoft.com/en-us/kb/3054933 July 14, 2015, cumulative update for Project Server 2013 (KB3054933)

https://support.microsoft.com/en-us/kb/3054937 July 14, 2015, cumulative update for SharePoint Server 2013 (KB3054937)

http://blogs.technet.com/b/stefan_gossner/archive/2015/07/15/important-psconfig-is-mandatory-for-july-2015-cu-for-sharepoint-2013.aspx

 

POST BY: ANOOP PRASAD [MSFT]

SharePoint 2013: Support for AppFabric 1.1 ends on 2 April 2016; is SharePoint affected?


 

As is well known, one of the software requirements of SharePoint 2013 is Microsoft AppFabric 1.1, which provides the Distributed Cache functionality in SharePoint.

https://technet.microsoft.com/en-us/library/cc262485.aspx#section4

Cumulative Update Package 1 for Microsoft AppFabric 1.1 for Windows Server (KB 2671763)

Microsoft's recent announcement ending support for AppFabric 1.1 on 2 April 2016 (http://blogs.msdn.com/b/appfabric/archive/2015/04/02/windows-server-appfabric-1-1-ends-support-4-2-2016.aspx) has generated much discussion about how it impacts supportability for SharePoint 2013 today and SharePoint 2016 in the future.

It is important that Microsoft supports its products for the published support lifecycle, including the individual components that constitute a larger product release –

https://support.microsoft.com/en-gb/gp/lifepolicy

With this in mind, despite the ending of support for App Fabric 1.1 as an individual product, Microsoft will continue to support the component through to the expected end of support lifecycle of SharePoint 2013 and SharePoint 2016.

Please be assured that this announcement has no bearing on the support of your existing or future deployments of SharePoint 2013 or SharePoint 2016 through to the end of life of those products.

 

POST BY : Rajan Kapoor [MSFT]
