Jan
29
2012

Custom claim providers for SharePoint – lessons learned

In a number of the websites we run on SharePoint 2010, we use custom claim providers. In this post I will describe the issues and challenges we had when creating and registering these providers.

Availability

The way to register a custom claim provider is to create a feature that does the job. You do this by creating a feature receiver and inheriting your receiver from SPClaimProviderFeatureReceiver, as described by Steve Peschka. This needs to be a farm scoped feature. After activation, your custom claim provider is available in every web application, on every zone. This means that after a user logs in on one of your websites, SharePoint notifies all registered claim providers and asks them for claims for the user that is logging in. This applies both to an internal user on the internal (default) zone and to an internet user logging in to your website. In our farm we have websites for multiple labels, which means that the custom claim provider for website X of label A also kicks in for website Y, belonging to label B. In a later blogpost, Steve also describes how to solve that. In the feature receiver, you need to set the IsUsedByDefault property of the SPClaimProviderDefinition to false:

public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
    ExecuteBaseFeatureActivated(properties);
    SPClaimProviderManager cpm = SPClaimProviderManager.Local;
    foreach (SPClaimProviderDefinition cp in cpm.ClaimProviders)
    {
        if (cp.ClaimProviderType == typeof(YourCustomClaimProvider))
        {
            cp.IsUsedByDefault = false;
            cpm.Update();
            break;
        }
    }
}

We can now control exactly which web applications and zones our providers kick in on.

LESSON 1: Before choosing the easy way out and making your claim providers available throughout the whole farm, think about whether this is really what you want. Apart from the performance penalty, you will probably find lots of errors in your log files, generated by your claim providers running in places they were not built for.

Registration

After activating the farm feature that sets IsUsedByDefault to false, we still need to register the provider on the web applications and zones where it should run. We have chosen to do this using a custom PowerShell script. We run this script from our Project Installer (more about that later).

param($RedactieUrl, $ClaimProviderName)

$snapin = Get-PSSnapin | Where-Object {$_.Name -eq 'Microsoft.SharePoint.Powershell'}
if ($snapin -eq $null)
{
    Write-Host "Loading Microsoft SharePoint Powershell Snapin"
    Add-PSSnapin "Microsoft.SharePoint.Powershell"
}

function RegisterClaimProviderOnZone {
    param($WebApplication, $Zone, $ClaimProviderName)
    if ($WebApplication.IisSettings.ContainsKey($Zone))
    {
        $settings = $WebApplication.GetIisSettingsWithFallback($Zone)
        $providers = $settings.ClaimsProviders
        if (-not ($providers.Contains($ClaimProviderName)))
        {
            $providers += $ClaimProviderName
            Set-SPWebApplication -Identity $WebApplication `
                -Zone $Zone `
                -AdditionalClaimProvider $providers
            Write-Host "Registered $ClaimProviderName on $($WebApplication.Url) in zone $Zone"
        }
        else
        {
            Write-Host "$ClaimProviderName already registered on $($WebApplication.Url) in zone $Zone"
        }
    }
}

$WebApplication = Get-SPWebApplication $RedactieUrl
RegisterClaimProviderOnZone $WebApplication "Default" $ClaimProviderName
RegisterClaimProviderOnZone $WebApplication "Internet" $ClaimProviderName

It runs just after installing the WSP and activating the farm feature. We run this script with 2 parameters: the URL of the web application and the name of the claim provider. In this case, the claim provider gets registered on both the default zone and the internet zone; we have other claim providers that are only registered on the internet zone. When we first started registering our providers, we had a different implementation of our registration script, and it caused us some headaches. The main reason was our misinterpretation of the AdditionalClaimProvider parameter of the Set-SPWebApplication cmdlet. We assumed we could pass a new claim provider and the command would add that new provider to the current collection. That is not the case! The parameter you pass IS the new collection.

This is the way NOT to do it:

$claimProvider = Get-SPClaimProvider $ClaimProviderName
Set-SPWebApplication -Identity $RedactieUrl `
    -Zone $Zone `
    -AdditionalClaimProvider $claimProvider

Everything worked fine until we needed to register a second claim provider in a web application. After running the installation of site B, site A suddenly stopped working. It took us some time to realize what happened and that our registration had caused the issue. Glad we found this in our test environment!
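A quick way to see what actually ended up registered where is to dump the ClaimsProviders collection for every zone. This is just a diagnostic sketch (the URL is a placeholder):

$wa = Get-SPWebApplication "http://website-a"
foreach ($zone in $wa.IisSettings.Keys)
{
    $providers = $wa.GetIisSettingsWithFallback($zone).ClaimsProviders
    Write-Host "$zone : $($providers -join ', ')"
}

Running this before and after each project installation would have shown us immediately that the second registration had replaced the first.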

LESSON 2: When you build and deploy custom claim providers, regression testing is very important, especially if you create providers that are available farm-wide or if multiple providers are registered in the same web application. Test whether your registration works properly and whether all providers still work as expected.

Hey, where did my claim providers go?

In our project installation, 95% of the work we need to do is scripted. But we all know the situations where something goes wrong and you manually tweak some settings here and there. In this case we had to manually set the custom sign-in page on the Edit Authentication page (Authentication Providers button in the ribbon). And by doing that, we lost the custom claim provider registrations on our web application zone(s). Of course that happened in a pretty narrow installation window, and suddenly our users did not get their claims. Oops. It took us some time to find out that we had lost the registrations of the providers, and why.

Another way we have lost our providers is deactivation of the farm feature. It is still hard to find out why the feature was deactivated, but a number of times we needed to re-activate it.

We now have a custom script that makes it real easy to check (thanks Wouter!). It checks whether a specific provider is available in the farm and whether it is registered on the Internet zone of a specific web application:

param(
    [string]$Url
)

$Provider = Get-SPClaimProvider | ? {$_.DisplayName -eq "Our custom Claims"} | Select -First 1
$HasProvider = $Provider -ne $null
Write-Host "Claim Provider exists: $HasProvider"

$WebApplication = Get-SPWebApplication $Url
$IisSettings = $WebApplication.GetIisSettingsWithFallback("Internet")
$HasRegisteredClaimProvider = $IisSettings.ClaimsProviders.Contains("OurCustomClaimProvider")
Write-Host "Our Claim Provider is registered: $HasRegisteredClaimProvider"

And if something is broken, there is a script that fixes it. It is the same as the script above, with an extra step that re-activates the farm feature.
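The fix script itself is not reproduced here, but the extra activation step presumably boils down to something like this sketch (the feature name is hypothetical):

# Re-activate the farm feature if it is no longer active (feature name is hypothetical)
$active = Get-SPFeature -Farm | Where-Object { $_.DisplayName -eq "OurClaimProviderFeature" }
if ($active -eq $null)
{
    Enable-SPFeature -Identity "OurClaimProviderFeature"
}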

LESSON 3: If you register your claim providers on a specific web application and zone, never touch the Edit Authentication page; or, if you have to, run your scripts afterwards to re-register the providers.

Context

A claim provider does have context, but only in a number of methods, where you get a context parameter. And be aware: this context is the URL of the web application, it is NOT the URL of the site collection your user is using. Methods without the context parameter can and will be called without any context, so there is no SPContext.Current and no HttpContext.Current. We wanted to read a site collection specific setting for our claim provider, but after many tries (and errors and unexpected behavior) we decided it was not the way to go. In some methods you will have a current context, but only in a limited number of them, and you cannot be 100% sure of having that context.

LESSON 4: When designing your claim provider, first study the methods of SPClaimProvider and the context the claim provider gives you. Live with the fact that the only context you will get, in some of the methods, is the URL of the web application.

Debugging

LESSON 5: When you use custom claims, make sure you have a custom page or web part in place that shows you the claims the current user has. This MSDN page has an example. You will need it for troubleshooting purposes. We have created a _LAYOUTS page that is deployed by our internal platform installation. That is easy, because this way the page is always available in all websites and can be used to troubleshoot multiple different claim providers. And we don’t need administrative permissions, nor do we clutter the content with a web part on a page.
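When troubleshooting on the server itself, it can also help to decode a claims-encoded login name into its claim type and value, using SPClaimProviderManager.DecodeClaim. A small example (the encoded login shown is just an illustration):

$cpm = [Microsoft.SharePoint.Administration.Claims.SPClaimProviderManager]::Local
# 'i:0#.w|STTO\someuser' is an example Windows claims-encoded login
$claim = $cpm.DecodeClaim("i:0#.w|STTO\someuser")
Write-Host "$($claim.ClaimType) = $($claim.Value) (issuer: $($claim.OriginalIssuer))"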

Permanent link to this article: http://www.tonstegeman.com/blog/2012/01/custom-claim-providers-for-sharepoint-lessons-learned/

Nov
21
2011

Installing SharePoint–SQL Server

In my previous blog post I described in general terms how we deploy SharePoint within our organization. The first step is to install SQL Server. Because we install everything using unattended PowerShell scripts, we install SQL Server from those scripts as well. This post shows how we do it.

Create settings file

As described in this post, we need a settings.xml file describing what to install and where to find the binaries. It looks like this (my server is called STTO-SQL):

<?xml version="1.0" ?>
<SP2010Config>
  <Binaries>
    <SQLServerR2 displayname="Sql Server 2008 R2"
        location="E:\setup.exe" />
  </Binaries>
  <Topology>
    <!--DataBaseServer-->
    <Server Name="STTO-SQL" >
      <Install_SQL2008R2 />
    </Server>
  </Topology>
  <General>
    <PasswordFile>.\STTO-passwords.xml</PasswordFile>
  </General>
  <Domain>
    <DomainName>STTO</DomainName>
    <DNSDomainName>stto.local</DNSDomainName>
  </Domain>
  <SQL2008R2>
    <SQLEngineServiceAccount>STTO\svcSQL-SPDB</SQLEngineServiceAccount>
    <SQLSysAdminAccounts>Builtin\Administrators</SQLSysAdminAccounts>
    <INSTALLSHAREDDIR>C:\Program Files\Microsoft SQL Server</INSTALLSHAREDDIR>
    <INSTALLSHAREDWOWDIR>C:\Program Files (x86)\Microsoft SQL Server</INSTALLSHAREDWOWDIR>
    <INSTANCEDIR>C:\Program Files\Microsoft SQL Server</INSTANCEDIR>
    <SQLTEMPDBDIR>D:\SQLData</SQLTEMPDBDIR>
    <SQLTEMPDBLOGDIR>D:\SQLTransactionLog</SQLTEMPDBLOGDIR>
    <SQLUSERDBDIR>D:\SQLData</SQLUSERDBDIR>
    <SQLUSERDBLOGDIR>D:\SQLTransactionLog</SQLUSERDBLOGDIR>
  </SQL2008R2>
</SP2010Config>

The settings file tells the installer to install SQL Server 2008 R2, tells it where to find the password file, and has a configuration section (SQL2008R2) with the SQL setup settings. The ZIP file contains the password file in the same folder; it is a better idea to store it on a network share where only the installer account has permissions to read it.
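Reading this file from PowerShell is straightforward. A minimal example (the actual installer does this in Utils.ps1, sketched later in this post):

[xml]$cfg = Get-Content .\settings.xml
$cfg.SP2010Config.Binaries.SQLServerR2.location       # E:\setup.exe
$cfg.SP2010Config.SQL2008R2.SQLEngineServiceAccount   # STTO\svcSQL-SPDB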

Create SQL Configuration file

The next thing to prepare is the configuration file for the unattended SQL installation. This article shows you how to do that. The easiest way is to use the SQL setup wizard up to the “Ready to Install” step and take the ini file the setup program created. If you are installing R2, please note this before you start the installation.

The config file in the zip file contains this sample configuration:

[SQLSERVER2008]
IACCEPTSQLSERVERLICENSETERMS="True"
INSTANCEID="MSSQLSERVER"
ACTION="Install"
FEATURES=SQLENGINE,BIDS,SSMS,ADV_SSMS
HELP="False"
INDICATEPROGRESS="True"
QUIET="False"
QUIETSIMPLE="True"
X86="False"
ENU="True"
ERRORREPORTING="False"
INSTALLSHAREDDIR=
INSTALLSHAREDWOWDIR=
INSTANCEDIR=
SQMREPORTING="False"
INSTANCENAME="MSSQLSERVER"
AGTSVCACCOUNT=
AGTSVCPASSWORD=
AGTSVCSTARTUPTYPE="Automatic"
ISSVCSTARTUPTYPE="Automatic"
ISSVCACCOUNT="NT AUTHORITY\NetworkService"
ASSVCSTARTUPTYPE="Automatic"
ASCOLLATION="Latin1_General_CI_AS"
ASDATADIR="Data"
ASLOGDIR="Log"
ASBACKUPDIR="Backup"
ASTEMPDIR="Temp"
ASCONFIGDIR="Config"
ASPROVIDERMSOLAP="1"
FARMADMINPORT="0"
SQLSVCSTARTUPTYPE="Automatic"
FILESTREAMLEVEL="0"
ENABLERANU="0"
SQLCOLLATION="Latin1_General_CI_AS"
SQLSVCACCOUNT=
SQLSVCPASSWORD=
SQLSYSADMINACCOUNTS=
TCPENABLED="1"
NPENABLED="1"
BROWSERSVCSTARTUPTYPE="Disabled"
RSSVCACCOUNT=
RSSVCPASSWORD=
RSSVCSTARTUPTYPE="Automatic"
RSINSTALLMODE="DefaultSharePointMode"
SQLTEMPDBDIR=
SQLTEMPDBLOGDIR=
SQLUSERDBDIR=
SQLUSERDBLOGDIR=

You can find this file in the ZIP file in the .\Scripts\DB folder; it is called ‘SQL2008R2_Unattended.ini’. A number of settings are not specified in this file; they are filled in during installation with the values provided in the settings file. The ini file is used as a template: the installer adds the values from the settings file and saves the result to a copy of this ini file, and that copy is the file the SQL installer uses. The advantage of doing this is that we now have one central XML file where we control the settings of our SQL installation. And we also don’t need to put passwords in the ini file; the installer does that for us.

Run the installer

After logging on to the server and starting a PowerShell window (using Run as Administrator!), we can start the installation by running Install.ps1. This script calls 2 other PowerShell files from the .\Scripts\DB folder:

$CurrentFolder = Get-Location

#Clear all previous errors
$Error.Clear()

## Run all scripts intended for database servers in the farm
& $CurrentFolder\Scripts\DB\1.PrePareSql2k8R2Unattended.ps1
& $CurrentFolder\Scripts\DB\2.InstallSQL-2008R2.ps1

## SQL Setup has failed.
if($LastExitCode -eq 123)
{
    Exit
}

Write-Host "Finished"
 

Both ps1 files first load the Utils.ps1 file from the Support folder. This utils file contains some general functions (like getting a password from the password file, logging, error handling, etc.). It also reads all general settings from the Settings.xml file.
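Utils.ps1 itself is not reproduced in this post. As a rough sketch of what it might contain, assuming only the function names and variables the scripts below rely on ($Binaries, $SQL2008R2, LogMessage, LogError, GetPassword):

# Sketch of Utils.ps1 (assumed; the original file is not shown in this post)
# Parse settings.xml once and expose the sections the other scripts use
[xml]$SettingsXml = Get-Content "$RootScriptFolder\settings.xml"
$Binaries     = $SettingsXml.SP2010Config.Binaries
$Topology     = $SettingsXml.SP2010Config.Topology
$SQL2008R2    = $SettingsXml.SP2010Config.SQL2008R2
$PasswordFile = $SettingsXml.SP2010Config.General.PasswordFile

function LogMessage
{
    param($Message)
    Write-Host $Message
}

function LogError
{
    param($Message)
    Write-Host $Message -ForegroundColor Red
}

function GetPassword
{
    param([string]$AccountName)
    [xml]$accounts = Get-Content $PasswordFile
    $node = $accounts.Accounts.Account | Where-Object { $_.Name -eq $AccountName }
    if ($node -eq $null)
    {
        LogError "No password found for account $AccountName"
        return $null
    }
    # Each <Account> element holds CDATA like 'password=J8r2)881STc2mL1'; strip the prefix
    return $node.InnerText.Substring("password=".Length)
}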

Preparing the ini file

The first ps1 file prepares the ini file required by the SQL installation process. It takes the template and adds the configuration from the settings.xml file to that ini file.

#Include library from $RootScriptFolder\Support folder
$CurrentFolder = Split-Path $myInvocation.MyCommand.Definition -Parent
$ParentFolder = Split-Path $CurrentFolder -Parent
$RootScriptFolder = Split-Path $ParentFolder -Parent
. $RootScriptFolder\Support\Utils.ps1

Function ReplaceInUnattendedFile
{
    param ($Find, $Replace)
    $Path = Resolve-Path $UnattendedFile
    $Text = [String]::join([Environment]::newline, (Get-Content -Path $Path))
    $NewText = $Text.Replace($Find, $Replace)
    Set-Content $Path -value $NewText
}

Function CreateDir
{
    param ($Path)
    try
    {
        If(-not (Test-Path $Path))
        {
            LogMessage "Creating $Path"
            New-Item -path $Path -type Directory
        }
    }
    catch
    {
        LogError "Error creating directory $Path"
    }
}

Function PrepareSQL2008R2UnattendedInstall
{
    if (-not ((Test-Path "$env:ProgramFiles\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Binn\sqlagent.exe") -or `
        (Test-Path "$env:ProgramFiles\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlagent.exe") -or `
        (Test-Path "${env:ProgramFiles(x86)}\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Binn\sqlagent.exe") -or `
        (Test-Path "${env:ProgramFiles(x86)}\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlagent.exe") ))
    {
        ## Read required parameters from settings
        $SQLSource = [System.IO.Path]::GetDirectoryName($Binaries.SQLServerR2.location)
        $SQLEngineServiceAccount = $SQL2008R2.SQLEngineServiceAccount
        $SQLEngineServiceAccountPassword = GetPassword $SQL2008R2.SQLEngineServiceAccount
        $SQLSysAdminAccounts = $SQL2008R2.SQLSysAdminAccounts
        $INSTALLSHAREDDIR = $SQL2008R2.INSTALLSHAREDDIR
        $INSTALLSHAREDWOWDIR = $SQL2008R2.INSTALLSHAREDWOWDIR
        $INSTANCEDIR = $SQL2008R2.INSTANCEDIR
        $SQLTEMPDBDIR = $SQL2008R2.SQLTEMPDBDIR
        $SQLTEMPDBLOGDIR = $SQL2008R2.SQLTEMPDBLOGDIR
        $SQLUSERDBDIR = $SQL2008R2.SQLUSERDBDIR
        $SQLUSERDBLOGDIR = $SQL2008R2.SQLUSERDBLOGDIR

        CreateDir $INSTALLSHAREDDIR
        CreateDir $INSTALLSHAREDWOWDIR
        CreateDir $INSTANCEDIR
        CreateDir $SQLTEMPDBDIR
        CreateDir $SQLTEMPDBLOGDIR
        CreateDir $SQLUSERDBDIR
        CreateDir $SQLUSERDBLOGDIR

        $UnattendedFile = "$RootScriptFolder\Scripts\DB\sql-2008R2.ini"
        $UnattendedSourceFile = "$RootScriptFolder\Scripts\DB\SQL2008R2_Unattended.ini"

        ## Overwrite unattended file of previous runs
        LogMessage "- Preparing unattended SQL setup file"
        Copy-Item $UnattendedSourceFile $UnattendedFile -Force
        Get-Item $UnattendedFile | % { $_.IsReadOnly = $false }

        ## Store specified variables in the unattended file
        ReplaceInUnattendedFile "SQLSVCACCOUNT=" "SQLSVCACCOUNT=`"$SQLEngineServiceAccount`""
        ReplaceInUnattendedFile "SQLSVCPASSWORD=" "SQLSVCPASSWORD=`"$SQLEngineServiceAccountPassword`""
        ReplaceInUnattendedFile "AGTSVCACCOUNT=" "AGTSVCACCOUNT=`"$SQLEngineServiceAccount`""
        ReplaceInUnattendedFile "AGTSVCPASSWORD=" "AGTSVCPASSWORD=`"$SQLEngineServiceAccountPassword`""
        ReplaceInUnattendedFile "RSSVCACCOUNT=" "RSSVCACCOUNT=`"$SQLEngineServiceAccount`""
        ReplaceInUnattendedFile "RSSVCPASSWORD=" "RSSVCPASSWORD=`"$SQLEngineServiceAccountPassword`""
        ReplaceInUnattendedFile "SQLSYSADMINACCOUNTS=" "SQLSYSADMINACCOUNTS=`"$SQLSysAdminAccounts`""
        ReplaceInUnattendedFile "INSTALLSHAREDDIR=" "INSTALLSHAREDDIR=`"$INSTALLSHAREDDIR`""
        ReplaceInUnattendedFile "INSTALLSHAREDWOWDIR=" "INSTALLSHAREDWOWDIR=`"$INSTALLSHAREDWOWDIR`""
        ReplaceInUnattendedFile "INSTANCEDIR=" "INSTANCEDIR=`"$INSTANCEDIR`""
        ReplaceInUnattendedFile "SQLTEMPDBDIR=" "SQLTEMPDBDIR=`"$SQLTEMPDBDIR`""
        ReplaceInUnattendedFile "SQLTEMPDBLOGDIR=" "SQLTEMPDBLOGDIR=`"$SQLTEMPDBLOGDIR`""
        ReplaceInUnattendedFile "SQLUSERDBDIR=" "SQLUSERDBDIR=`"$SQLUSERDBDIR`""
        ReplaceInUnattendedFile "SQLUSERDBLOGDIR=" "SQLUSERDBLOGDIR=`"$SQLUSERDBLOGDIR`""
    }
}

if ((NeedsInstall "Install_SQL2008R2") -eq $true) { PrepareSQL2008R2UnattendedInstall }

The configuration file is now saved as sql-2008R2.ini, and it is ready to be used by the SQL installer.

Installing SQL Server

The second ps1 file, InstallSQL-2008R2.ps1, installs SQL Server, but only if SQL is needed on the server we are currently running the installer on. Remember this is (going to be) a general installer that we run on all servers in the farm. It checks the settings.xml file to see if it can find the current server name in the Topology section:

 

<Server Name="STTO-SQL" >
  <Install_SQL2008R2 />
</Server>

 

If it finds the server, it checks (using the function NeedsInstall, called at the bottom of the ps1 file) whether the current server needs to have SQL installed. If so, it installs the database server.
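NeedsInstall is loaded from Utils.ps1 and is not shown in this post. A minimal sketch, assuming it looks for a child element with the given name under the <Server> entry that matches the local computer name (using the $Topology variable from the Utils.ps1 sketch above):

function NeedsInstall
{
    param([string]$ActionName)
    foreach ($server in $Topology.Server)
    {
        if ($server.Name -ne $env:COMPUTERNAME -and $server.Name -ne "localhost") { continue }
        # The action applies when a matching child element exists, e.g. <Install_SQL2008R2 />
        if ($server.SelectSingleNode($ActionName) -ne $null) { return $true }
    }
    return $false
}

The complete InstallSQL-2008R2.ps1 follows: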

#Include library from $RootScriptFolder\Support folder
$CurrentFolder = Split-Path $myInvocation.MyCommand.Definition -Parent
$ParentFolder = Split-Path $CurrentFolder -Parent
$RootScriptFolder = Split-Path $ParentFolder -Parent
. $RootScriptFolder\Support\Utils.ps1

$BinaryName = $Binaries.SQLServerR2.displayname
$BinaryLocation = $Binaries.SQLServerR2.location

$UnattendedFile = "$RootScriptFolder\Scripts\DB\sql-2008R2.ini"

Function InstallSQL2008R2
{
    if (-not ((Test-Path "$env:ProgramFiles\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Binn\sqlagent.exe") -or `
        (Test-Path "$env:ProgramFiles\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlagent.exe") -or `
        (Test-Path "${env:ProgramFiles(x86)}\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Binn\sqlagent.exe") -or `
        (Test-Path "${env:ProgramFiles(x86)}\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlagent.exe") ))
    {
        if (Test-Path $BinaryLocation)
        {
            LogMessage "- Installing '$BinaryName'"

            $currentlocation = Get-Location
            LogMessage $currentLocation
            $newlocation = [System.IO.Path]::GetDirectoryName($Binaries.SQLServerR2.location)
            Set-Location $newlocation

            Start-Process -FilePath "$BinaryLocation" -ArgumentList "/CONFIGURATIONFILE=`"$UnattendedFile`" /INDICATEPROGRESS /QS" -Wait

            If ((Get-Service | ? { $_.Name -eq "MSSQLSERVER" }) -eq $Null )
            {
                Set-Location $currentlocation
                LogError "Failed to install SQL Server, check logfiles at %programfiles%\Microsoft SQL Server\100\Setup Bootstrap\Log"
                Exit 123
            }

            Set-Location $currentlocation
            LogMessage "- Installation of '$BinaryName' finished"
        }
        else
        {
            LogError "- Install path $BinaryLocation not found"
        }
    }
}

if ((NeedsInstall "Install_SQL2008R2") -eq $true)
{
    InstallSQL2008R2
}

 

At this point our database server is ready. The next step is to install SharePoint on the application server in our farm. That will be the subject of the next blogpost!

The scripts above are also available in this ZIP file.

Permanent link to this article: http://www.tonstegeman.com/blog/2011/11/installing-sharepointsql-server/

Nov
16
2011

Organizing SharePoint projects–Installation

In a previous post I described our SharePoint 2010 DTAP street. We have quite a number of environments, so it is pretty important to make installation of SharePoint 2010 easy. When we started with SharePoint 2010 in May 2010, the first thing we did was create a set of installation scripts to install every single farm in our environment. Donald Hessing gave us a kick start, and Niels Loup and Ferdi Meijer added scripting so that we could:

  • Install SharePoint unattended to single- and multi server topologies
  • Run an automated install of all prerequisites (including SQL Server)
  • Install tooling used by developers (also unattended)
  • Have nice database names for SharePoint databases
  • Have all dev and test machines in the same setup as production
    • same least privileged setup for accounts
    • same multi-tenant setup
    • same web applications
  • Cleanup and rebuild development machines easily
  • Have repeatable deployments

Our OTAP Installer (OTAP is Dutch for DTAP) starts with a freshly installed Windows 2008 R2 server (or a set of them) and, when it’s done, we have SharePoint 2010 deployed in a multi tenant setup, with a number of tenants configured and all web applications configured, including a dummy site collection. It is up to the Project Installer (more on that in a later blogpost) to install and configure SharePoint for specific projects.
A lot has been written about SharePoint installation using PowerShell, and AutoSPInstaller is available, so I won’t post all scripts here (unless you ask me to), but I will show some bits and pieces.

Preparations

Here is what the installer needs as input before we start it:

  • a clean set of Windows servers
  • the installer account has administrative permissions on all servers
  • a settings.xml file that contains the topology for the servers, the services that run on each server and all settings that are used throughout the installation process. For more information, see below
  • a sites.xml file that contains settings for the web applications to create (more on this in a future post)
  • all accounts available in the AD. We use a specific set of accounts for each environment. All accounts have a fixed prefix, followed by an environment specific suffix. The farm account for our development servers for example, is called sp-farm-d and for acceptance it is called sp-farm-a.
  • a password.xml file that contains all passwords for the accounts that we use in the setup. In our environment this password file is locked somewhere on a file share, and the installer account is the only account having read access to it. It looks like this:
    <?xml version="1.0" encoding="utf-8" ?>
    <Accounts>
      <Account Name="PGGM-INTRA\sp-Farm-d"><![CDATA[password=r+)L;}=0L+5C'<O]]></Account>
      ....
      <Account Name="PGGM-INTRA\svcSQL-SPDBServer"><![CDATA[password=J8r2)881STc2mL1]]></Account>
    </Accounts>

  • access to a file share with all binaries

The scripts are written so that they first check whether an action has already been done. If it has, they skip the action. This way we can always restart the script on every server when something goes wrong, or when we need to re-configure the server/farm.
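The SQL installation scripts (covered in a separate post) use exactly this kind of guard; as a sketch, the general shape of every action is:

# General shape of every installer action: detect whether it already ran, and only act when it did not
if (Test-Path "$env:ProgramFiles\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlagent.exe")
{
    Write-Host "SQL Server is already installed, skipping"
}
else
{
    # ... perform the installation ...
}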

Settings file

The OTAP Installer uses a XML file that has all the settings for the installation. It contains a number of elements:

  • Binaries– Contains the references to all binaries we need throughout the installation. It looks like this:
    <Binaries>
      <SQLServerR2 displayname="Sql Server 2008 R2" location="D:\setup.exe" />
      <SQLManagementObjects displayname="Microsoft SQL Server 2008 Management Objects"
        location="Z:\OTAP2010Binaries\SQL2008ManagementObjects\SharedManagementObjects.msi" />
      <SharePointHotfix_NetFramework35 displayname=".Net FrameWork 3.5 Hotfix"
        location="Z:\OTAP2010Binaries\HotFixes\Windows6.1-KB976462-v2-x64.msu" />
      <SharePoint displayname="SharePoint 2010 RTM"
        location="Z:\OTAP2010Binaries\sp2010RTM\setup.exe" />
      <SharePointLPDutch displayname="SharePoint Server 2010 Dutch Language Pack"
        location="Z:\OTAP2010Binaries\sp2010lp_dutch\setup.exe" />
      <SharePointCU201108 displayname="SharePoint 2010 August 2011 CU"
        location="E:\Install\OTAP2010Binaries\SP2010_CU201108\office2010-kb2553048-fullfile-x64-glb.exe" />
      <PowerGui displayname="PowerGUI.2.3.0.1503"
        location="\\XXX\OTAP2010Binaries\PowerGUI\PowerGUI.2.3.0.1503.msi" />
    </Binaries>

  • Topology– This element contains all servers in the farm and for every server, the instructions for installation and configuration. The actions for server ‘localhost’ are run on every server in the farm. A brief example:
    <Topology>
      <Server Name="localhost">
        <Disable_ConfigTasksScreenAtLogon />
        <Install_DotNetFramework />
      </Server>
      <!--DataBaseServer-->
      <Server Name="STTO-SSQL" >
        <Install_SQL2008R2 />
      </Server>
      <!--Web Front-End-->
      <Server Name="STTO-SPWFE" >
        <Create_SQLAliases />
        <InstallSQLManagementObjects />
        <Install_SharePoint />
        <JoinFarm />
        <Configure_ServiceApp_EnterpriseSearch >
          <QueryComponent />
        </Configure_ServiceApp_EnterpriseSearch>
      </Server>
      <!--ApplicationServer-->
      <Server Name="STTO-SPAPP" >
        <Create_SQLAliases />
        <InstallSQLManagementObjects />
        <Install_SharePoint />
        <CreateFarm />
        <Create_WebApp_CentralAdmin />
        <Configure_ServiceApp_ManagedMetadata />
        ....
        <Create_WebApplications_And_Sites />
        <Create_Subscriptions />
      </Server>
    </Topology>

    In a normal settings.xml file, the application server would typically contain more service applications to configure. I have omitted them here to keep this post short.

  • General– General settings like the reference to the password file
  • Domain– Domain name. For my demo domain:
    <Domain>
      <DomainName>STTO</DomainName>
      <DNSDomainName>stto.local</DNSDomainName>
    </Domain>

  • SQLAliases– One or more SQL Aliases to create on the server (the Create_SQLAliases action; see the sketch after this list). For my demo setup it looks like this:
    <SQLAliases>
      <SQLAlias Name="SPDBServer" Value="STTO-SQL" />
    </SQLAliases>

  • SQL2008R2 – Settings for configuring SQL Server. I will cover this bit in more detail in the next blogpost. My demo configuration:
    <SQL2008R2>
      <SQLEngineServiceAccount>PGGM-INTRA\svcSQL-SPDBServer</SQLEngineServiceAccount>
      <SQLSysAdminAccounts>Builtin\Administrators</SQLSysAdminAccounts>
      <INSTALLSHAREDDIR>C:\Program Files\Microsoft SQL Server</INSTALLSHAREDDIR>
      <INSTALLSHAREDWOWDIR>C:\Program Files (x86)\Microsoft SQL Server</INSTALLSHAREDWOWDIR>
      <INSTANCEDIR>C:\Program Files\Microsoft SQL Server</INSTANCEDIR>
      <SQLTEMPDBDIR>C:\SQLData</SQLTEMPDBDIR>
      <SQLTEMPDBLOGDIR>C:\SQLTransactionLog</SQLTEMPDBLOGDIR>
      <SQLUSERDBDIR>C:\SQLData</SQLUSERDBDIR>
      <SQLUSERDBLOGDIR>C:\SQLTransactionLog</SQLUSERDBLOGDIR>
    </SQL2008R2>

  • IIS– Element containing IIS settings, the most important being the folder where to put the configuration files (the wwwroot folder):
    <IIS>
      <Settings wwwroot="E:\STTOWEB" />
      <DisableLoopbackCheck value="true" />
    </IIS>

  • Farm– XML element with the options to configure the farm. Important settings here are name and SQL settings for the Configuration database and the farm account.
    <Farm
      DefaultDBServer="SPDBServer"
      DiagnosticLogLocation="E:\LogFiles\Diagnostic"
      Passphrase="STTO_$ecretP@$$fra$#"
      Account="STTO\sp-farm-d"
      DebugScriptMode="False"
      OutgoingEmailServer="localhost"
      DefaultFromAddress="someone@stto.local"
      DefaultReplyToAddress="someone@stto.local"
      DefaultCodePage="65001" >
      <!--DebugScriptMode deletes existing databases-->
      <DisableUnneededServices value="true" />
      <ConfigurationDatabase Name="Config" Recovery="Simple" InitDataSizeInMB="125"
               GrowthDataSizeInMB="50" InitLogSizeInMB="50" GrowthLogSizeInMB="50" />
    </Farm>

  • WebApplications– The WebApplications element lets you configure the Central Admin web application that you want to have in your farm. After creating the database and the web application, the script extends the central administration web application, to have a custom Url:
    <WebApplications>
      <CentralAdmin Name="SharePoint Central Admininistration v4"
               Port="51000">
        <ExtendWebApp Name="SharePoint Central Admininistration v4 - extended"
               Path="E:\STTOWEB\centraladmin_extended"
               Zone="Intranet"
               Url="http://centraladmin-d"
               Hostheader="centraladmin-d"
               Port="80" />
        <ApplicationPool Name="SharePoint Central Admininistration AppPool - D"
               Account="PGGM-INTRA\sp-farm-d" />
        <Database DbServer="Default"
               Recovery="Simple"
               Name="Content_CentralAdmin_D"
               InitDataSizeInMB="200"
               GrowthDataSizeInMB="50"
               InitLogSizeInMB="100"
               GrowthLogSizeInMB="50" />
      </CentralAdmin>
    </WebApplications>

  • Services – This is the biggest element in the configuration file. It contains the configuration of all service applications that we will use in our farm. Please note that this element contains the configuration for the service application; the Topology element decides on which server the service app will run. For the purpose of this blog post, it only shows the example configuration of the MMS (Managed Metadata Service) service application:

    This MMS service application is created as a multi tenant service application.

    <Services>
      <ManagedMetadataService Name="Managed Metadata Service"
              MultiTenant="True"
              ContentTypePushdownEnabled="True"
              DefaultKeywordTaxonomy="True"
              DefaultSiteCollectionTaxonomy="True" >
        <Application Name="Managed Metadata Service App - D" >
          <ApplicationPool Name="Managed Metadata Service App - D"
                  Account="PGGM-INTRA\sp-MMS-d" />
          <Database DbServer="Default"
                  Recovery="Simple"
                  Name="ServiceApp_ManagedMetadata_B"
                  InitDataSizeInMB="50"
                  GrowthDataSizeInMB="50"
                  InitLogSizeInMB="10"
                  GrowthLogSizeInMB="25" />
        </Application>
      </ManagedMetadataService>
    </Services>
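The Create_SQLAliases action referenced in the Topology and SQLAliases elements is not shown in this post. Creating such an alias comes down to writing a value under the SQL Native Client ConnectTo registry keys; a sketch of an assumed implementation (TCP/IP, default port):

# Sketch: create a SQL alias from a <SQLAlias Name="..." Value="..." /> element (assumed implementation)
$aliasName = "SPDBServer"
$target    = "STTO-SQL"
foreach ($key in "HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo",
                 "HKLM:\SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo")
{
    if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
    # DBMSSOCN = TCP/IP; append ',<port>' for a non-default port
    Set-ItemProperty -Path $key -Name $aliasName -Value "DBMSSOCN,$target"
}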

Running the scripts

After all preparations are done, it is time to start the installation. To do this, an administrator logs into a server using the installer account (in my example this is sp-install-d), starts a PowerShell command prompt (using Run as Administrator) and starts the installer. The installer loads settings.xml and reads that file to find the instructions for the current server. The order of installation:

  • Database server
  • Application server / servers
  • Web Front end server / servers

Finally

This blogpost is a general introduction to our OTAP installer. It gives you a basic idea of how the installer works and how to configure it. In the next few posts I will cover some aspects of the installation process in more detail. If you are interested in a specific part, please contact me and I will handle those parts first.

Permanent link to this article: http://www.tonstegeman.com/blog/2011/11/organizing-sharepoint-projectsinstallation/

Oct
31
2011

Unattended install of SQL2008R2 and using SQLSYSADMINACCOUNTS

The other day I was preparing an ini file for an unattended installation of SQL Server 2008 R2. The easiest way to do that is using the setup wizard until you get to the “Ready to Install” page. This article shows you how to do it.

The problem I had with the ConfigurationFile.ini that the wizard created for me is that it included both these options:

  • SQLSYSADMINACCOUNTS
  • ADDCURRENTUSERASSQLADMIN

I set the first option to BUILTIN\Administrators to give local administrators the correct permissions, and I set the second option to false. But when I first ran the install unattended, SQL Server was installed properly, but the sysadmin group wasn’t getting access to SQL. This TechNet article states that SQLSYSADMINACCOUNTS is a required setting, because I wasn’t installing the Express edition. It turns out you have to remove the ADDCURRENTUSERASSQLADMIN option from the generated ini file completely. It is a setting for Express editions and cannot be used when SQLSYSADMINACCOUNTS is used. It took me some time to find the solution in this CodeProject article.
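So for a non-Express unattended installation, keep the first option and delete the second one from the ini file entirely:

SQLSYSADMINACCOUNTS="BUILTIN\Administrators"
; ADDCURRENTUSERASSQLADMIN must be removed completely, not just set to "False"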

Permanent link to this article: http://www.tonstegeman.com/blog/2011/10/unattended-install-of-sql2008r2-and-using-sqlsysadminaccounts/

Oct
17
2011

Organizing SharePoint projects – Our DTAP street

With a number of SharePoint 2010 projects running in production and a few projects that are currently in their second stage, I thought it was about time to write something about how we organized these projects. In this post I will describe how we applied DTAP in our projects. It took us a year and a half and a few revisions to get where we are now. I will describe which environments we have and how we use them. And here is the disclaimer: it is a setup that works pretty well in our organization. It might work in your organization, but it can also be complete overkill. It depends. Most important is to think about this before you build your street. In this post I hope to give you some insights into our situation that might help you think about yours.

Thinking about how many environments and how to use them is important, because it is good for everyone to have a clear understanding of the purpose of each environment. Also make sure everybody understands the processes around each environment. Who’s doing deployments, who to call when it is down, where to find the latest build, things like that.

We started working with SharePoint 2010, doing an internal proof of concept project around knowledge management. We had good discussions on how to organize things in Visual Studio, how to deploy our software and how to manage versions. A year and a half later things have grown a bit and we needed to re-think our strategies to make the growth possible.

Overview

I start off with giving you a schematic overview of our street. A few things to note:

  • Besides intranet projects, we also started to run internet facing sites. We have a separate SharePoint production farm for those websites. This also led to 2 acceptance farms, 2 test farms, etc.
  • Multi server farms are shown as 3 servers. This is not the real topology of these farms; the real topology is out of scope for the purpose of this post. Just the fact that a farm is built using multiple servers is important.
  • Every server is virtualized.

DTAP

In the following paragraphs I will talk about each environment, starting with the Development and moving forward to Production. For every environment I will discuss the following subjects:

  • Usage – The purpose of the environment. Who is using it, and what for.
  • Topology– Single server or multi server.
  • Installation and management– Who is installing the servers and who is responsible for maintaining them. How do we install SharePoint and tooling.
  • Deployment – How are projects deployed to the environment.

Development (D)

Usage – Used by developers and architects to build software and technical proof of concepts. These servers are also used by (functional) designers as demo environments to talk to users about SharePoint and build functional proof of concepts.
Most developers have a dedicated server. For the other roles, we share servers. In SharePoint – BI projects multiple project members share a development server to build and test reports and scorecards.
  • Topology– All development machines are standalone farms. Our development machines are not dedicated for one of the farms. To be flexible when people change teams, new projects start and other projects end, we want to be able to switch our development servers from intra- to internet and vice versa.
  • Installation and management– Initial Installation of the OS and SharePoint is done by our IT-Pro team. We have a set of PowerShell scripts that install SharePoint and prerequisites and all tooling that we need. Day to day management is done by the developers themselves. They are responsible for installing service packs etc. They also have a script to change their dev server from an intranet to an internet development server. Main difference are the service applications that are used and the service accounts, that are dedicated for the type of farm. Initially developers get a fully functional machine, with the SharePoint farm up and running. Creating web applications, site collections, etc. is the first thing they do in their projects.
  • Deployment – Software and SharePoint configuration we build in our projects are deployed to the local farm through Visual Studio deployment and through our Project Installer. This is a set of PowerShell scripts that installs WSP’s, configures SharePoint and adds content. It probably is the subject of a future post.
    For the developers there is 1 simple ps1 file to kick off. This cleans up the machine and re-installs everything that is created by the whole team. This way front-end developers who don’t know anything about SharePoint are also able to do and test their work in a SharePoint environment.
    In projects that are in their second stage, we still use this procedure to install the project to dev machines. Developers are also responsible for building the upgrade path to their latest version, but this is not tested in the development farm. In the setup we currently have, it is not very easy to go back to a specific version to test the upgrade path over and over again.

Test – Development (T-D)

Usage – T-D environments are test farms dedicated to a project team. Used by testers in that team to test the software and SharePoint configuration. Main reason to have a dedicated farm is that testers are no longer dependent on other projects that run in the same farm. They control everything themselves and decide on their own deployment schedule to this internal test server. Larger project teams have a dedicated T-D environment. Small projects use the B-D farm for this purpose. If projects have testers that build automatic tests, this typically is the environment they use to run the tests.
  • Topology– All T-D machines are standalone farms. They are the same machines as development machines. This way we are flexible when setting up new teams. T-D machines can become D machines and vice versa. After a project team stops, D and T-D machines are recycled for new projects, either in the intranet or the internet farm.
  • Installation and management– Same story as for development machines.
  • Deployment – Our Project Installer is used to install software to SharePoint and configure the environment for testers. Generally one of the developers in the team has the task to install a new version to this environment. Most teams do this every day. The developer gets the latest code from Team Foundation Server, builds and packages everything and runs a re-install on his own machine. This is to validate that everything checked in during the previous day is installable. If this leads to a working site, he/she runs the same re-install (1 big PowerShell script) on the T-D server. If something is wrong, the team has time to fix it and testers still have yesterday’s build to continue testing. This way testers always have a working test environment, with just a short break when the installer runs. We started off by always installing the daily build, but this turned out not to be a good idea. Having 3 testers asking every 10 minutes when they can start testing is not very good for your blood pressure.

Build – Development (B-D)

Usage – B-D environments are the build servers dedicated to a farm. We have 2 build servers, one for intranet projects and one for internet projects. We use these environments for 5 purposes:
1) Daily validation if all code builds (using TFS TeamBuild) and can be deployed.
2) Testing if everything works in a multiserver environment, e.g. we catch a lot of “it works on my machine” in this environment. And because of the daily build, we find it the next day after it is checked in, instead of weeks later in the test environment.
3) Integration testing – this is the first environment where all projects get together. We use this to test if everything keeps running after installing the daily update of every project. This currently is a manual process. In the near future we hope to use the automated tests built by the projects teams for this.
4) Internal testing by testers in smaller projects that do not have a dedicated T-D server. We first started off by using this server as the test environment for all projects. This caused too many missed testing hours, because one of the projects was still fixing their daily build. Projects were simply too dependent on each other. We fixed it by introducing T-D servers.
5) Validation by IT-Pro’s if projects that are delivered to T are properly installed and tested by the development teams.
  • Topology– Both B-D farms are multi-server, with a single web frontend.
  • Installation and management– Same story as for development machines. Our development department is also responsible for managing this environment, just like the development servers. The challenge here is that multiple project teams are dependent on this farm. They need to work together to ensure test environments are available when needed and coordinate fixing of their daily builds.
  • Deployment – Our Project Installer is used to install software to SharePoint and configure the environment, by kicking off the PowerShell script for the Project Installer. This happens via the Windows Task Scheduler on the application server, after the TFS teambuilds for all projects have finished. We are currently using TFS2010 with the MSBuild based teambuild; we haven’t yet upgraded the teambuild processes to the new workflow based teambuild. One day we will move to the approach described by Mike Morton and Chris O’Brien.

Test (T)

Usage – For our IT-Pro’s, everything we have talked about until now is called Development. For them, the test environment is where it all starts. The environments are used for:
1) Admins use this environment to learn project specific installations.
2) Project teams use it to fine tune the installations before they get to production (we all sometimes miss some config changes we made at the project start and forgot to document….).
3) Admins have a test form that they go through after every deployment. They do basic tests to ensure the environment stays up and running if this goes to production. They check event logs, health analyzer, etc.
4) Project teams use this environment for demos and for testing by business users. Generally all projects are installed a few times to T before they go to A.
5) Test connections to back-end systems. Not every installation to T needs to go through to production. We can install versions on T to validate connections to back office systems.
  • Topology– Both T farms are multi-server.
  • Installation and management– SharePoint and everything it needs is installed to the servers by our scripts. Of course without the development tooling. Developers do not have access to this server. They just have read permissions to the diagnostic logging folder. Everything else is done by admins.
  • Deployment – Our Project Installer is used to install software to SharePoint and configure the environment. This is done by an IT Pro. Installation is done based on release notes. When projects deliver for deployment to T, they also deliver this document. It describes all required parameters, prerequisites, how to run the Project Installer and, if needed, manual configuration steps. Projects themselves decide their deployment schedule to the test environment. Some do it every (scrum) sprint, some do it weekly and some do it after a number of sprints. It is the scrum master’s task to ensure a time slot and admin are available to do the installation.
    We have a special mode for our Project Installer; it does not just run every action, but asks whether or not to run each action. For admins this makes it easy to see what happens and to find errors more easily. Installations in T are done with the help of one of the developers.

Acceptance (A)

Usage – The acceptance servers are used by our IT-Pro’s as final validation before we go to production. For the A farms, deployments need to be without problems. It is the final rehearsal before the P-installation. We try not to do functional testing in this environment. After installing to the A environment we ask the product owner to approve the application (or hotfix) for installation to the production environment. If something goes wrong during installation, or the product owner decides some changes need to be made, we need to go back to the project team and go from B-D to T, and back to A. In the ideal world there’s 1 A installation for every P installation.
  • Topology– Both A farms are multi-server.
  • Installation and management– SharePoint is installed to the servers by the same PowerShell scripts, but with different parameter sets.
  • Deployment – A deployments are done by admins, based on the release notes.

Production (P)

Usage – Not a lot to tell here. It’s what it’s all about. We do all of the above to get everything we create to these farms.
  • Topology– Both production farms are multi-server.
  • Installation and management– SharePoint is installed to the servers by the same PowerShell scripts, but with different parameter sets. The big advantage of these PowerShell scripts is that it is way easier to ensure all environments in our street have the same setup. So if we have a multi tenant farm, all our development machines are also running the same multi tenant setup. And if we need PerformancePoint, the development machines also have the PPS service application running. From a quality perspective this is pretty important.
  • Deployment – Production deployments are done by admins, based on the release notes. By the time we get here, the deployments have been tested a number of times, so they should run without problems. Our goal is to automate as much as possible by using and continuously improving our Project Installer. This way there is very little manual configuration left by the time we get to the P-installation. This greatly improves the quality of our installations.

Permanent link to this article: http://www.tonstegeman.com/blog/2011/10/organizing-sharepoint-projects-our-dtap-street/

Sep
13
2011

Security Blueprints – introduction

SharePoint 2007 contains a lot of options for security configuration. In larger site collections, it is very easy to lose the overview of how the security in your site collections is configured. Publishing parts of the content in your site collections for anonymous users makes this even harder. And if you grant your power users the permissions to manage the security settings themselves, it suddenly is impossible to keep an overview. If you have done support for a SharePoint environment and have tried to solve security issues, you are probably familiar with this problem. Security blueprints can help you in this scenario. These blueprints are a report of all security related settings in your sites. In the first version of the product, these reports are published as XML files. By creating your own XSLT stylesheets, you can use this product to create your own reports on the security setup of your SharePoint sites.

A number of sample support questions that can be addressed easier using Security Blueprints:

  • I am getting all these request e-mails asking me to grant users permissions. How do I find all places where my e-mail address is configured to be the contact for access requests?
  • The Table of Contents web part does not take permissions into account. We have a number of sub sites with unique permissions, but my users see all subsites. In my other site collection, this works as expected. Using a security blueprint, you can easily check the permissions on these sites and find out that users see the subsites in the navigation because anonymous access is turned on for these sites.
  • I have granted my project managers our custom permission level that allows them to add people to specific SharePoint groups, but it does not work. They cannot add users. In another site collection, this works as described. By comparing security blueprints from the 2 site collections, you can quickly find out that the custom permission level is missing one of the critical permissions.

The most important settings that are (currently) included in the report:

  • SharePoint sitegroups and their permissions
  • Permission levels
  • Lists / Document libraries and their security settings
  • Anonymous settings
  • Request Access settings
  • Activated Site Features
  • Activated Site Collection Features
  • Site Collection Administrators

Security blueprints are generated manually by a site administrator, or on a scheduled basis by the Security Blueprints timerjob. See the installation article on this weblog for the installation and setup instructions. The reports are published as XML files in an automatically created document library. This library can be added to every site collection, or to a central storage location. Every time a blueprint is generated (manually or scheduled), the library is checked to see whether a report was previously published for the site collection. If this is not the case, the report is published as a Full Report. If a report was previously published, that report is compared to the new one. If there are changes, a new Full Report is published. If there are no changes, a No Changes report is published.
You can exclude specific parts of your site collections by configuring Endpoints. See the installation article for details.

The screenshot below shows the blueprints library after the first 3 runs of the process in an empty site collection based on the Collaboration Portal template. After the first run, I have created custom permissions for the Reports site and the document library in the document center. This results in a new full report in the second run. In the 3rd run, there were no changes, as can be seen in the screenshot. To get an idea of what a security blueprint report looks like, the last Full Report of this site collection is available on this link.

[Screenshot: the blueprints library after the first 3 runs]

Another scenario where security blueprints can help is when you have multiple site collections that upon launch have the same structure and security setup. Before the launch of your new site collections, you create a blueprint of each new site collection. Now your site collection administrators can go wild and do their thing. By automatically publishing a new security report if something changes in the security setup, it is much easier for you to track when security settings are changed. This can make troubleshooting these nasty security issues a lot easier. It allows you to identify the differences between the original security setup (the blueprint) and the current setup in your site collections.

Download

You can download Security Blueprints on the SharePoint Objects site on CodePlex.

Permanent link to this article: http://www.tonstegeman.com/blog/2011/09/security-blueprints-introduction/

Sep
15
2009

Introducing SharePoint Security Blueprints

My CodePlex site contains a new product called Security Blueprints. I created this for one of our customers and made some enhancements to it. They allowed me to publish this as an open source project (thank you for that!). I hope the solution can help you as it helps them, and I hope you like the idea behind it. The current version is a V1 product. There are a lot of ideas for the next version, which I will soon start to work on.

Security blueprints ‘document’ all security settings in your site collections. The product comes with a timerjob that repeats this task on a schedule. It only creates a new report (a blueprint) after something in the settings (or structure) has changed. This allows you to monitor the security setup and should make troubleshooting security issues easier. The customer that came up with the idea of the blueprints works with an increasing number of site collections that all have the same basic structure and security setup. During the lifecycle of these site collections, people start modifying structure and security settings. The blueprints helped us in a number of cases to identify the cause of a problem. The blueprint of the ‘master’ site collection is regarded as the documentation of the security setup. By comparing this blueprint with the current report of a site collection, we were able to quickly identify the problem.

This article contains more information about the Security Blueprints.
If you want to test the blueprints, you can download it on CodePlex. The installation instructions are documented in this article.

If you have any suggestions for improvement, or you would like to write a XSLT stylesheet to make the reports more readable, feel free to contact me.

Permanent link to this article: http://www.tonstegeman.com/blog/2009/09/introducing-sharepoint-security-blueprints/

Sep
12
2009

Security Blueprints – installation

This article describes how to install the Security Blueprints in your SharePoint environment. The first step is to install the solution package. After you have done this, this article shows you how to configure the security blueprints. The last part of this article describes how you can manually start the process for 1 site collection.

Step 1 – Install the solution package

The first step is to install the Security Blueprints software to your environment. Unzip the file that you have downloaded from CodePlex to a folder on the server that is running Central Administration.

Start setup.exe and click Next.
The installer runs a system check. If none of the checks fails, you can continue the installation by clicking Next.
In this dialog, select the web applications that will use the Security Blueprints features. Click Next.
The installer will now install the software to your SharePoint environment. Click Next after the process completes.
If all steps were successful, click the Close button.

SharePoint Objects Security Blueprints are now installed in your SharePoint farm. The installation process has installed these files and folders to your server(s):

Name                                          Location
TST.SharePointObjects.SecurityBluePrint.dll   Global Assembly Cache
CreateSecurityBlueprint.aspx                  12\TEMPLATE\LAYOUTS\TST\
CreateBluePrintsTimerJobSettings.aspx         12\TEMPLATE\ADMIN\TST\
tstfeature.gif                                12\TEMPLATE\IMAGES\TST\
feature.xml                                   12\TEMPLATE\FEATURES\TST.SharePointObjects.SecurityBluePrint.Menu\
menu.xml                                      12\TEMPLATE\FEATURES\TST.SharePointObjects.SecurityBluePrint.Menu\
feature.xml                                   12\TEMPLATE\FEATURES\TST.SharePointObjects.SecurityBluePrint.CreateBluePrintsTimerJob\

Step 2 – Configure the timer job

Security blueprints are generated by a SharePoint timer job, which can be installed by activating a feature. Navigate to the Central Administration of your SharePoint farm. On the Application Management tab, select Manage Web application features. On this page, find the web application that runs the site collections you want to monitor using the security blueprints. Then click the Activate button for the feature ‘SharePoint Objects – Security Blueprint Menu’.
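
If you are curious what such a feature does under the covers, below is a sketch of a web application scoped feature receiver that installs a timer job with a default daily schedule. The class and job names are illustrative; the actual Security Blueprints receiver may differ in its details.

using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

// Sketch of a timer job definition; the real job generates the blueprints.
public class CreateBluePrintsJob : SPJobDefinition
{
    // Default constructor required for deserialization by the timer service.
    public CreateBluePrintsJob() : base() { }

    public CreateBluePrintsJob(string name, SPWebApplication webApplication)
        : base(name, webApplication, null, SPJobLockType.Job) { }

    public override void Execute(Guid targetInstanceId)
    {
        // Walk the site collections of this web application and
        // generate the blueprint XML here.
    }
}

// Sketch of a feature receiver that (re)installs the job on activation.
public class CreateBluePrintsTimerJobReceiver : SPFeatureReceiver
{
    private const string JobName = "TST.SecurityBlueprints.CreateBluePrints";

    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        SPWebApplication webApp = (SPWebApplication)properties.Feature.Parent;

        // Remove a previously installed instance of the job, if any.
        foreach (SPJobDefinition job in webApp.JobDefinitions)
        {
            if (job.Name == JobName)
            {
                job.Delete();
                break;
            }
        }

        // Install the job with a default daily schedule; the settings
        // page can change the title and schedule later.
        CreateBluePrintsJob blueprintJob = new CreateBluePrintsJob(JobName, webApp);
        SPDailySchedule schedule = new SPDailySchedule();
        schedule.BeginHour = 2;
        schedule.EndHour = 3;
        blueprintJob.Schedule = schedule;
        blueprintJob.Update();
    }

    public override void FeatureDeactivating(SPFeatureReceiverProperties properties) { }
    public override void FeatureInstalled(SPFeatureReceiverProperties properties) { }
    public override void FeatureUninstalling(SPFeatureReceiverProperties properties) { }
}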


The timer job is now installed; it can be configured using a special administration page. The menu that navigates to this administration page can be added by activating a site collection feature. Navigate to the Site Settings of the Central Administration site. In the Site Collection Administration section, click Site collection features. Find the feature called ‘SharePoint Objects – Security Blueprint Menu’ and click Activate.


If you now navigate to the Application Management tab in Central Administration, you will find a new section called ‘SharePoint Objects’. This section has a menu option called ‘Configure timerjob for creating security blueprints’. Click this link to configure the timer job. The first section on this page lets you choose a web application.


If you select a web application that does not have the Security Blueprint timer job feature activated, the Status field notifies you that the timer job is not activated. If the feature is activated, the Status field shows the last run time of the timer job. In this section you can also set the display title for the timer job and its schedule.
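
Behind the scenes, updating the title and schedule presumably comes down to looking up the job definition on the selected web application and updating it. A minimal sketch (the job name and url are assumptions, not the actual values used by Security Blueprints):

using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

// Sketch (not the actual Security Blueprints code) of what the settings page
// presumably does: find the job on the selected web application and update
// its title and schedule.
class ConfigureBlueprintJob
{
    static void Main()
    {
        SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://intranet"));
        foreach (SPJobDefinition job in webApp.JobDefinitions)
        {
            if (job.Name == "TST.SecurityBlueprints.CreateBluePrints")
            {
                job.Title = "Create security blueprints";  // display title
                SPDailySchedule schedule = new SPDailySchedule();
                schedule.BeginHour = 4;  // run daily, somewhere between
                schedule.EndHour = 5;    // 04:00 and 05:00
                job.Schedule = schedule;
                job.Update();
                break;
            }
        }
    }
}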

The second section on the configuration page allows you to configure the location where the blueprints are stored. When the blueprint timer job runs, it creates a security blueprint for every site collection in the web application. This blueprint is saved as an XML file in an automatically created document library. By configuring the Library Site Url setting, you decide where the timer job publishes the blueprint.


There are 3 options (a sketch of the resolution logic follows the list):

  • Leave the setting empty
    The blueprint library is created in the root site of each site collection.
  • Enter a relative url (e.g. ‘/admin/blueprints’)
    The blueprint library is created in each site collection, in the subsite with this url. If there is no subsite found on this url, the blueprints are saved in the root site of each site collection.
  • Enter an absolute url (e.g. http://admin.intranet/blueprints)
    All blueprints of all site collections are stored in a single document library. The timer job creates a subfolder for each site collection. These folders are hidden from the user in the view. This allows you to manage the blueprints in a central location.
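
The resolution logic behind these three options could look like the sketch below. This is an illustration of the rules described above, not the actual Security Blueprints implementation; note that the caller is responsible for disposing the returned SPWeb (and, for an absolute url, the SPSite it belongs to).

using System;
using Microsoft.SharePoint;

// Sketch of the resolution logic behind the Library Site Url setting.
static class BlueprintLocation
{
    public static SPWeb ResolveTargetWeb(SPSite siteCollection, string librarySiteUrl)
    {
        if (string.IsNullOrEmpty(librarySiteUrl))
        {
            // Option 1: empty setting -> root site of the site collection.
            return siteCollection.RootWeb;
        }

        if (librarySiteUrl.StartsWith("http", StringComparison.OrdinalIgnoreCase))
        {
            // Option 3: absolute url -> one central library for all blueprints.
            return new SPSite(librarySiteUrl).OpenWeb();
        }

        // Option 2: relative url -> subsite within the site collection,
        // falling back to the root site when the subsite does not exist.
        SPWeb web = siteCollection.OpenWeb(librarySiteUrl);
        if (!web.Exists)
        {
            web.Dispose();
            return siteCollection.RootWeb;
        }
        return web;
    }
}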

The last section of the timer job setup page allows you to configure endpoints. Endpoints are relative urls of specific subsites in your site collections. When the blueprint process reaches a site whose url equals one of the endpoints, it stops generating blueprint XML below that site. Suppose you have a subsite called ‘Projects’ with a number of subsites, one for each project. You are interested in the security settings of the Projects site itself, but the security settings of the individual project sites are not important. You can enter ‘/Projects’ as an endpoint, meaning the Projects site is the last site in that branch of the tree to be included in the blueprint. You can now add new project sites to your site collection(s) without changing the security blueprint for your site collection. Otherwise, every new project site would be seen as a change to the security blueprint of the site collection, and a new report would be published.


You can enter multiple endpoints by putting every endpoint on a new line in the text box.
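
Conceptually, an endpoint simply cuts off the recursive walk through the site tree: the endpoint site itself is still documented, but its subsites are skipped. A minimal sketch, not the actual Security Blueprints implementation (the XML writing is left out, and the endpoint check is shown against the server-relative url for simplicity):

using System;
using System.Collections.Generic;
using Microsoft.SharePoint;

// Sketch of how endpoints cut off the walk through the site tree.
static class BlueprintWalker
{
    public static void Walk(SPWeb web, ICollection<string> endpoints)
    {
        // ... document the security settings of 'web' here ...

        // An endpoint site is included itself, but its subsites are skipped.
        if (endpoints.Contains(web.ServerRelativeUrl))
        {
            return;
        }

        foreach (SPWeb subWeb in web.Webs)
        {
            try
            {
                Walk(subWeb, endpoints);
            }
            finally
            {
                subWeb.Dispose();
            }
        }
    }
}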

Step 3 – Start the process manually

The Security Blueprints allow you to start the process manually for a single site collection. If the feature is not yet activated for the site collection, navigate to the Site Settings of the root site in your site collection. In the Site Collection Administration section, select Site collection features. Find the feature called ‘SharePoint Objects – Security Blueprint Menu’ and click the Activate button.


The Site Settings page now has a new section called SharePoint Objects, with a menu option called ‘Create security blueprint’. This link is available for every subsite in the site collection, which allows you to create a blueprint for just one subsite instead of a full report for all sites in the site collection. The root site of the site collection is always included in the blueprint.


After clicking this link, you can manually start the process by clicking the Create button. You can publish the blueprint to a specific location or to a central location in your farm by entering a url; see Step 2 in this article for the details. That paragraph also explains the endpoints you can configure.


After clicking the Create button, the blueprint is created and you are redirected to the library that contains the report.

Permanent link to this article: http://www.tonstegeman.com/blog/2009/09/security-blueprints-installation/

Aug
23
2009

JQuery and SharePoint – Lookup fields and event lists

In one of my recent projects I used some jQuery to change the width of a multivalued lookup field in SharePoint and to hide the Workspace field in an event list.

The script to change the width of multivalued lookup fields:

<script type="text/javascript">
$(document).ready(function()
{
   // Resize the select boxes of the lookup field and their parent container;
   // replace <INSERT_YOUR_FIELDNAME> with the name of your lookup field.
   $("select[id*='<INSERT_YOUR_FIELDNAME>']").parent().width(300);
   $("select[id*='<INSERT_YOUR_FIELDNAME>']").parent().height(200);
   $("select[id*='<INSERT_YOUR_FIELDNAME>']").width(300);
});
</script>

When the lookup list contains a lot of similar items, it is now much easier for users to pick the right ones.


The script to hide the workspace checkbox in an event list:

<script type="text/javascript">
$(document).ready(function()
{
   // Hide the container (four levels up) that holds the 'Workspace' checkbox.
   $("span[title='Workspace']").parent().parent().parent().parent().hide();
});
</script>

The script above hides the ‘Workspace’ checkbox in the NewForm and the EditForm. The script below hides the workspace field from DispForm.aspx:

<script type="text/javascript">
$(document).ready(function()
{
   // Hide the container (three levels up) that holds the workspace link bookmark.
   $("a[name='SPBookmark_WorkspaceLink']").parent().parent().parent().hide();
});
</script>

There are several ways to add the script to the pages. For the lookup fields I used the approach I describe in this blog post. For the event list, I created custom EditForm, NewForm and DispForm pages and added the script to those pages directly.

Permanent link to this article: http://www.tonstegeman.com/blog/2009/08/jquery-and-sharepoint-lookup-fields-and-event-lists-2/

May
22
2009

SharePoint Filter web parts: using a context filter in a page layout

For our e-office intranet I was working on a number of page layouts. In one of these page layouts I wanted to use the out of the box Page Field Filter web part. After creating a new page using that page layout, the page crashed immediately, showing the error message “An unexpected error has occurred”. After switching off customErrors in web.config, the error message was “The Hidden property cannot be set on Web Part ‘g_8271d6f6_a902_4fa4_88ce_ca9ae1b0d463’, since it is a standalone Web Part.”.

Context filter web parts are not visible at runtime; they only show up when the page is in edit mode. The web parts use the Hidden property to hide themselves. The way this is done does not work when the web part is used directly in a page layout, resulting in this error message. I decided to change the Page Column Filter web part I released on CodePlex to make this work.

In this web part I created an override of the Hidden property. If the web part is used inside a web part zone, it behaves normally. If there is no web part zone, the property always returns false, to prevent the web part from throwing the error message above. Here’s the code:

[Browsable(false)]
public override bool Hidden
{
    get
    {
        if (base.WebPartManager == null)
        {
            // No web part manager (e.g. design time): fall back to the default.
            return base.Hidden;
        }
        if (this.Zone == null)
        {
            // Used directly in a page layout: never report Hidden, because
            // setting Hidden on a standalone web part throws an exception.
            return false;
        }
        // Used in a web part zone: hide unless the page is in edit mode.
        return !base.WebPartManager.DisplayMode.AllowPageDesign;
    }
    set
    {
        base.Hidden = value;
    }
}

Because our web part now returns false when used in a page layout, we need another way to hide it at runtime. To do this, I created an override of the Visible property as well. Here is the code:

[Browsable(false)]
public override bool Visible
{
    get
    {
        if (base.WebPartManager == null)
        {
            // No web part manager (e.g. design time): fall back to the default.
            return base.Visible;
        }
        if (this.Zone != null)
        {
            // In a web part zone, visibility is handled through Hidden above.
            return true;
        }
        // Directly in a page layout: only render while the page is in edit mode.
        return base.WebPartManager.DisplayMode.AllowPageDesign;
    }
    set
    {
        base.Visible = value;
    }
}

Please note that this code snippet always returns true if there is a web part zone. Without this, the web part will throw an error when it is used in a web part zone (added to the page through the web part gallery).

Now our context filter web part works as expected when used as a normal web part and when used in a page layout.

Permanent link to this article: http://www.tonstegeman.com/blog/2009/05/sharepoint-filter-web-parts-using-a-context-filter-in-a-page-layout/
