Active Directory Module for Windows PowerShell – Quick start guide

The Active Directory module for Windows PowerShell is available starting with Windows Server 2008 R2. To use the AD PowerShell cmdlets, you must have at least one Windows Server 2008 R2 domain controller (DC) in your domain.

Installing the AD PowerShell module:

On a Windows Server 2008 R2 box, open an elevated PowerShell console window (powershell.exe) and run the following commands:

PS C:\> import-module servermanager
PS C:\> Add-WindowsFeature -Name "RSAT-AD-PowerShell" -IncludeAllSubFeature
NOTE: The AD PowerShell module is installed by default on a DC.

Loading the AD PowerShell module:

Open a PowerShell console window and type

PS C:\> import-module activedirectory

Active Directory PSDrive:

If the machine is joined to a domain then a default drive named AD: is created. You can CD into this drive and use all the regular file system commands to navigate the directory. The paths are in X500 format.

PS C:\> cd AD:
PS AD:\>
PS AD:\> dir

PS AD:\> cd "DC=fabrikam,DC=com"
PS AD:\DC=fabrikam,DC=com> md "OU=myNewOU"

PS AD:\DC=fabrikam,DC=com> del "OU=myNewOU"

If you want to create a new drive connected to another domain or forest, or to use the more readable canonical path format, type:

PS C:\> New-PSDrive -PSProvider ActiveDirectory -Server "contoso.fabrikam.com" -Credential "Contoso\Administrator" -Root ""  -Name Contoso -FormatType Canonical

PS C:\> cd Contoso:
PS Contoso:\> dir | ft CanonicalName

PS Contoso:\> cd "contoso.fabrikam.com/"

Getting cmdlet list, help and examples:

PowerShell uses a verb-noun pair format to name cmdlets. For example:

New-ADGroup
Get-ADDomain
To get a list of AD cmdlets, type

PS AD:\> get-help *-AD*
PS AD:\> get-help New-AD*        ## would list all the cmdlets that create new AD objects
To get more info on a specific cmdlet or to read examples, type

PS AD:\> get-help set-aduser -detailed
PS AD:\> get-help get-aduser -examples
Tip: You can use PowerShell's tab completion feature to complete cmdlet names or parameter names. For example, after entering the Verb- part of a cmdlet name, you can press the <TAB> key to cycle through all of the nouns available for that verb.

Common tasks:

Here are some examples of commonly performed tasks using AD cmdlets:

PS C:\> New-ADUser -Name "John Smith" -SamAccountName JohnS -DisplayName "John Smith" -Title "Account Manager" -Enabled $true -ChangePasswordAtLogon $true -AccountPassword (ConvertTo-SecureString "p@ssw0rd" -AsPlainText -Force) -PassThru

PS C:\> New-ADGroup -Name "Account Managers" -SamAccountName AcctMgrs -GroupScope Global -GroupCategory Security -Description "Account Managers Group" -PassThru

PS C:\> New-ADOrganizationalUnit -Name AccountsDepartment -ProtectedFromAccidentalDeletion $true  -PassThru

PS C:\> Get-ADUser -Filter { name -like "john*" } ## Gets all the users whose name starts with John

PS C:\> Add-ADGroupMember -Identity AcctMgrs -Members JohnS

PS C:\> Get-ADGroupMember -Identity AcctMgrs

PS C:\> Get-ADPrincipalGroupMembership -Identity JohnS  ## Gets all the groups in which the specified account is a direct member.

PS C:\> Get-ADAccountAuthorizationGroup -Identity JohnS  ## Gets the token groups of an account

PS C:\> Unlock-ADAccount -Identity JohnS

PS C:\> Get-ADForest -Current LocalComputer

PS C:\> Get-ADDomain -Current LoggedOnUser

PS C:\> Get-ADDomainController -Filter { name -like "*" }  ## Gets all the DCs in the current domain
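
Because every AD cmdlet emits objects, the one-liners above also compose on the pipeline. Here is a hedged sketch of that idea; it assumes the user, group, and OU created in the earlier examples exist, and the department value is purely illustrative:

```powershell
# Sketch: find all disabled users in the (example) AccountsDepartment OU
# and enable them in bulk by piping the search results onward.
Get-ADUser -Filter { enabled -eq $false } -SearchBase "OU=AccountsDepartment,DC=fabrikam,DC=com" |
    Enable-ADAccount

# Sketch: stamp a department attribute on every member of the Account Managers group.
Get-ADGroupMember -Identity AcctMgrs |
    Get-ADUser |
    Set-ADUser -Department "Accounts"
```

The pipeline passes each object's identity along automatically, so there is no need for an explicit loop.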

What next?

In the next post we will give an overview of Active Directory PowerShell and talk about the various cmdlets we provide in this release.

Enjoy!
Swami


Swaminathan Pattabiraman [MSFT]
Developer – Active Directory Powershell Team

  • 27 Feb 2009 9:53 AM

    #

    Hello Swaminathan, thanks for opening this blog.

    Why you _require_ -Server parameter in New-PsDrive? You can provide default value for it pointing to current logon server for example. Same about -root parameter which can easily defaults to “” as in your example.

    Why not to make Canonical names default format btw? X500 requres quotes “every,time,when used,because, of, commas”, it right to left so hard to type, and tabcompletion works only on current level ( so you cant do cd mydomain.com\myou\[tab] for example).

    Anyway, thanks even for creating this option at all 🙂

    [PS <560> AD:\] Get-ADDomain

    Get-ADDomain : Parameter set cannot be resolved using the specified named parameters.

    Event if it cant be resolved (why not return my logon domain?) why not to ask me about required parameters, or return all matching objects, like Get-Process do for example?

    Same relates to all other your Get-* cmdlets.

    Get-ADUser -Filter { name –like “john*” } ## Gets all the users whose name starts with John

    Why not Get-ADSomething john* or even Get-ADSomething john ? You can use query by ANR (http://support.microsoft.com/kb/243299) as default parameter, and this will be perfect choice. Or another solution, just dont leave us with this ugly one. BTW, how to get _all_ users? 😉 Get-ADSomething (without params) should work. All other PowerShell cmdlets work this way, just look around.

    Is Get-ADAccountAuthorizationGroup is nothing other but Get-ADPrincipalGroupMembership with recurse parameter?

    Better to add a -Recurse parameter: Get-ADGroupMember and Get-ADPrincipalGroupMembership both lack one. This can be a resource-consuming operation sometimes, but it's a very important and popular scenario.

    Why in one case you use “Principal” (Get-ADPrincipalGroupMembership) and in another “Account” (Get-ADAccountAuthorizationGroup)? As it seems to me – its equal meanings there. BTW, IMHO “ADObject” is better and more intuitive 😉

    Again…

    Get-ADDomainController -Filter { name -like “*” }  ## Gets all the DCs in the current domain

    Why not just Get-ADDomainController ? 🙂

    Thats all for today 🙂 I hope my silly critics somehow help you build the real PowerAD 😉 Thanks for your work.

    Vasily Gusev, MVP: Admin Frameworks.

  • 27 Feb 2009 10:56 AM

    #

    Almost forgotten… About Search-ADAccount… There is no such verb as Search- or Find- in PowerShell, and no need in it.

    There is quote from PowerShell concepts about verbs(http://msdn.microsoft.com/en-us/library/ms714428.aspx):

    Get

    Retrieves a resource. For example, the Get-Content cmdlet retrieves the content of a file. Pairs with Set.

    Do not use verbs such as Read, Open, Cat, Type, Dir, Obtain, Dump, Acquire, Examine, Find, or _Search_.

    All this functionality that it provides, must be built in the Get-AD* cmdlets.

    There is no good in building more and more cmdlets just for separate some aspects of same general task (exept if you get bonuses for it ;)). Get-ADObject (Account/Principal/Whatever) should Get any ad objects in any way that I want (I’m dont want to search, i want GET ;)). Get-ADUser/Computer is just special aliases for some popular types.

    Same with Set. Set-ADSomething should set any of Something properties, like password for example. Reset-ADPrincipalPassword doesnt hurt while it “alias” for Set-AdAccount -Password (Get-Credential).

    All this will make AD part of PowerShell better integrate in whole system.

    And… I’m dont noticed formatting of ad objects, just because I think it will be done some time later prior to release. Is it in plans? 🙂

    Vasily Gusev, MVP: Admin Frameworks.

  • 3 Mar 2009 1:57 AM

    #

    Thanks Vasily for the feedback. Here are some answers to specific questions.

    >> 1. Why you _require_ -Server parameter in New-PsDrive?

    -Server parameter is optional in all our cmdlets and by default the cmdlets talk to a suitable DC in the computer’s domain.

    >> 2. -root parameter which can easily defaults to “”

    Fair point.

    >> 3. Regarding – Why not Get-ADSomething john* or even Get-ADSomething john ? You can use query by ANR ..

    >> Get-ADDomainController -Filter { name -like “*” }  ## Gets all the DCs in the current domain

    >> Get-ADDomain : Parameter set cannot be resolved using the specified named parameters.

    We are working on the default behavior of all the cmdlets and the experience should be better in the next release 🙂

    The default parameter set for get directory object cmdlets such as: Get-ADObject, Get-ADUser, Get-ADGroup etc. is -Identity.

    The purpose of -Identity is to uniquely identify an object in a domain. Thus we only support identities (such as: distinguishedName, objectGuid, objectSid and samAccountName) that are guaranteed to be unique by the server. For certain special objects (example: Fine Grained Password policy, Site, Domain controller etc.) we support “name” as the identity.
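
    To illustrate the point, each of the following targets the same user through a different unique identity form (the account values here are hypothetical, reusing names from the quick-start examples; the GUID is made up):

    ```powershell
    # Three ways to address one user via -Identity, each guaranteed unique by the server:
    Get-ADUser -Identity JohnS                                                     # samAccountName
    Get-ADUser -Identity "CN=John Smith,OU=AccountsDepartment,DC=fabrikam,DC=com"  # distinguishedName
    Get-ADUser -Identity "4f1c8a9e-0000-0000-0000-000000000000"                    # objectGuid (made up)
    ```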

    We will write more about Identity in a separate blog.

    Since ANR can potentially return more than one object, it does not qualify as an Identity. However, you can run an ANR query using a filter.

    PS C:\> get-aduser -Filter { anr -eq "John" }

    For getting all users type:

    PS C:\> get-aduser -Filter { name -like "*" }

    >> 4. Is Get-ADAccountAuthorizationGroup is nothing other but Get-ADPrincipalGroupMembership with recurse parameter?

    Not exactly. Get-ADAccountAuthorizationGroup returns all the security groups in which an account is a direct or indirect member. It does not include Distribution Groups.

    The returned set may also include additional groups that the system would consider the user a member of for authorization purposes.

    >> 5. Why in one case you use “Principal” (Get-ADPrincipalGroupMembership) and in another “Account” (Get-ADAccountAuthorizationGroup)?

    Good question. We would like to address this in a separate blog. Watch out for a topic on “ADObject model”

    >> 6. About Search-ADAccount… There is no such verb as Search- or Find- in PowerShell, and no need in it.

    It is a valid verb in Powershell V2 (http://blogs.msdn.com/powershell/archive/2007/05/09/proposed-new-standard-verbs.aspx)

    >> 7. There is no good in building more and more cmdlets just for separate some aspects of same general task.

    Again a good question, but I would prefer to address this in a separate blog.

    For now here is a short answer:

    Get-ADUser/Get-ADComputer are not just special aliases. They retrieve additional data and display it in a rich format. They also accept data in rich format inside the -Filter parameter.

    Similarly, Set-ADUser, Set-ADComputer, New-ADUser, New-ADGroup etc. provide additional, relevant parameters for creating or writing the respective objects.

    >> 8. And… I’m dont noticed formatting of ad objects, just because I think it will be done some time later prior to release. Is it in plans? 🙂

    Ah.. we thought no one would notice 🙂

    Once again thanks for the feedback. Keep them coming.

    Cheers,

    Swami

  • 3 Mar 2009 2:16 AM

    #

    Brandon Shell pointed out an elegant way to get a list of AD cmdlets. Here it is..

    PS C:\> get-command -module ActiveDirectory -verb get

    PS C:\> get-command -module ActiveDirectory -noun ADUser

    Cheers,

    Swami

  • 6 Mar 2009 1:09 AM

    #

    >The default parameter set for get directory object cmdlets such as: Get-ADObject, Get-ADUser, Get-ADGroup etc. is -Identity.

    >get-aduser -Filter { anr -eq “John” }

    You can have more than one default parameter (in different parameter sets), so it can easily be -Identity, and then (if input not valid X500 path) fallback to -Anr.

  • 6 Mar 2009 1:10 AM

    #

    > Ah.. we thought no one would notice 🙂

    You joking? 🙂 This is hard to beleive 🙂

  • 6 Mar 2009 5:41 PM

    #

    @Xaegr

    >> Ah.. we thought no one would notice 🙂

    > You joking? 🙂 This is hard to beleive 🙂

    Yes, I was just joking. Btw, was your comment regarding Provider cmdlet output? Or for all AD cmdlets?

    Cheers,

    Swami

  • 12 Mar 2009 1:23 AM

    #

    No, output from get-aduser is fine for me for example.

    Only one suggestion, please accept wildcard chars for -Properties parameter 🙂 Not all can remember ad property names form objects, so get-aduser someone -prop *logon* will be useful. And get-aduser someone -prop * of course.

  • 12 Mar 2009 10:32 PM

    #

    The -Properties parameter does support * and returns all the properties and LDAP attributes set on the object.

    It does not support wildcard characters, however. You can query the schema to get a list of all the LDAP attributes that can be set on an AD object.

    Here is a PowerShell function that does this:

    function GetPossibleLdapAttributes {
        Param ([Parameter(Mandatory=$true, Position=0)] [String] $ObjectClass)
        $rootDSE = Get-ADRootDSE
        $schemaObject = Get-ADObject -Filter { ldapDisplayName -like $ObjectClass } -Properties mayContain, systemMayContain -SearchBase $rootDSE.SchemaNamingContext
        $schemaObject.MayContain
        $schemaObject.SystemMayContain
    }

    Type:

    PS C:\> GetPossibleLdapAttributes computer

    PS C:\> GetPossibleLdapAttributes user

    Cheers,

    Swami

  • 19 Apr 2009 12:31 PM

    #

    On cmdlets like new-aduser could we have -organizationalunit rather than -path  (an alias on the parameter would be acceptable).

    AD admins think in terms of OUs rather than paths plus it would be consistent with Exchange

  • PeterW
    8 Dec 2011 10:26 AM

    #

    ServerManager Best Practices for AD scan is showing two problems:

    1. ActiveDirectory-Powershell is not installed

    I’ve tried enabling it, but I’m told the feature isn’t recognized, even though dism /online /get-features lists it.

    2. Strict replication consistency should be enabled

    Not sure if I should do this considering the warning about lingering objects and possible forest-wide authentication issues if LOs exist and strict is enabled.

    How can I reinstall the ActiveDirectory-Powershell feature and enable it?

    Should I worry about the strict setting?

    Help!

Group Policy Cmdlets in Windows PowerShell



The Windows PowerShell command-line and scripting language can be used to automate many Group Policy tasks, including configuring registry-based policy settings and various Group Policy Management Console (GPMC) tasks. To help you perform these tasks, the Group Policy module for Windows PowerShell provides the cmdlets covered in this section.

You can use these Group Policy cmdlets to perform the following tasks for domain-based Group Policy objects (GPOs):

  • Maintain GPOs: GPO creation, removal, backup, reporting, and import.
  • Associate GPOs with Active Directory Domain Services (AD DS) containers: Group Policy link creation, update, and removal.
  • Set inheritance and permissions on AD DS organizational units (OUs) and domains.
  • Configure registry-based policy settings and Group Policy Preferences Registry settings.
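
To make the task list above concrete, here is a hedged sketch exercising one cmdlet from each category. The GPO name, target OU, and registry value are illustrative assumptions, not values from this article:

```powershell
# Sketch: create a GPO, link it to an OU, and configure a registry-based policy setting.
# All names and paths below are examples only.
New-GPO -Name "SampleGPO" -Comment "Created from PowerShell"

# Link the new GPO to a (hypothetical) OU.
New-GPLink -Name "SampleGPO" -Target "OU=AccountsDepartment,DC=fabrikam,DC=com"

# Set a user-side registry-based policy value inside the GPO.
Set-GPRegistryValue -Name "SampleGPO" `
    -Key "HKCU\Software\Policies\Microsoft\Windows\Control Panel\Desktop" `
    -ValueName ScreenSaveTimeOut -Type String -Value 900
```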

Group Policy Cmdlet Prerequisites

To use the Windows PowerShell cmdlets for Group Policy, you must be running one of the following:

Windows Server 2008 R2 on a domain controller

–or–

Windows Server 2008 R2 on a member server that has the GPMC installed

–or–

Windows® 7 with Remote Server Administration Tools (RSAT) installed. (RSAT includes the GPMC and the Group Policy cmdlets)

Getting Started with the Group Policy Cmdlets

You must use the import-module grouppolicy command to import the Group Policy module before you use the Group Policy cmdlets. You can also modify your Windows PowerShell profile to import the Group Policy module every time you start a session. For more information, see about_Modules.

You can use the get-command -module grouppolicy command to get a list of all Group Policy commands.

You can get help for all Group Policy commands at once by using the get-command -module grouppolicy | get-help command.

Note
For more information about the Group Policy cmdlets, you can use the get-help <cmdlet-name> and get-help <cmdlet-name> -detailed commands to display basic and detailed help, respectively.

Because the information displayed by the get-help cmdlet can span many screens, the help alias is provided to display the first page of information. You can then press the spacebar to view subsequent pages of information. This has the same effect as using the more command—for example, get-help <your parameters> | more

 

Group Policy Cmdlets

Name Description
Backup-GPO Backs up one GPO or all the GPOs in a domain.
Copy-GPO Copies a GPO.
Get-GPInheritance Retrieves Group Policy inheritance information for a specified domain or OU.
Get-GPO Gets one GPO or all the GPOs in a domain.
Get-GPOReport Generates a report in either XML or HTML format for a specified GPO or for all GPOs in a domain.
Get-GPPermissions Gets the permission level for one or more security principals on a specified GPO.
Get-GPPrefRegistryValue Retrieves one or more registry preference items under either Computer Configuration or User Configuration in a GPO.
Get-GPRegistryValue Retrieves one or more registry-based policy settings under either Computer Configuration or User Configuration in a GPO.
Get-GPResultantSetOfPolicy Outputs the Resultant Set of Policy (RSoP) information to a file, for a user, a computer, or both.
Get-GPStarterGPO Gets one Starter GPO or all Starter GPOs in a domain.
Import-GPO Imports the Group Policy settings from a backed-up GPO into a specified GPO.
New-GPLink Links a GPO to a site, domain, or OU.
New-GPO Creates a new GPO.
New-GPStarterGPO Creates a new Starter GPO.
Remove-GPLink Removes a GPO link from a site, domain, or OU.
Remove-GPO Deletes a GPO.
Remove-GPPrefRegistryValue Removes one or more registry preference items from either Computer Configuration or User Configuration in a GPO.
Remove-GPRegistryValue Removes one or more registry-based policy settings from either Computer Configuration or User Configuration in a GPO.
Rename-GPO Assigns a new display name to a GPO.
Restore-GPO Restores one GPO or all GPOs in a domain from one or more GPO backup files.
Set-GPInheritance Blocks or unblocks inheritance for a specified domain or OU.
Set-GPLink Sets the properties of the specified GPO link.
Set-GPPermissions Grants a level of permissions to a security principal for one GPO or for all the GPOs in a domain.
Set-GPPrefRegistryValue Configures a registry preference item under either Computer Configuration or User Configuration in a GPO.
Set-GPRegistryValue Configures one or more registry-based policy settings under either Computer Configuration or User Configuration in a GPO.

Running Windows PowerShell Scripts

Windows PowerShell Owner's Manual
This is your guide to getting started with Windows PowerShell. Read through these pages to get familiar with Windows PowerShell, and soon you’ll be driving around like a pro.

On This Page

Running Windows PowerShell Scripts
Running Scripts From Within Windows PowerShell
Even More About File Paths
Bonus: “Dot Sourcing” a Script
Running Scripts Without Starting Windows PowerShell
See? That Wasn’t So Bad

Running Windows PowerShell Scripts


Few things in life are as exciting as getting a brand-new command shell and scripting language; in fact, getting a brand-new command shell and scripting language is so exciting that you can barely get the thing out of the box before you want to take it for a spin. Those of you who’ve downloaded Windows PowerShell know exactly what we’re talking about: if you’re like most people, the very moment the installation process finished you double-clicked a .PS1 file (.PS1 being the file extension for Windows PowerShell scripts), sat back, and waited for the magic to happen.

As it turned out, however, this is what happened:

[Screenshot: the PowerShell script opened in Notepad]

Hmmm, instead of running, your script opened up in Notepad. Interesting, but not exactly what you had in mind. Oh wait, you think, I get it: you probably have to run Windows PowerShell before you can run a Windows PowerShell script. OK, that makes sense. And so, with that in mind, you open up Windows PowerShell and type the path to the .PS1 file at the command prompt. You press ENTER and wait for the magic to happen:

As it turns out, however, this is what happens:

File C:\scripts\test.ps1 cannot be loaded because the execution of scripts is disabled on this system. Please see "get-
help about_signing" for more details.
At line:1 char:19
+ c:\scripts\test.ps1 <<<<

Wow; how nice. A new command shell and scripting environment that doesn’t even let you run scripts. What will those guys at Microsoft think of next?

Listen, don’t panic; believe it or not, everything is fine. You just need to learn a few little tricks for running Windows PowerShell scripts. And the Scripting Guys are here to help you learn those tricks.

Running Scripts From Within Windows PowerShell


Let’s start with running scripts from within Windows PowerShell itself. (Which, truth be told, is probably the most common way to run Windows PowerShell scripts.) Why do you get weird error messages when you try to run a script? That’s easy. The security settings built into Windows PowerShell include something called the “execution policy;” the execution policy determines how (or if) PowerShell runs scripts. By default, PowerShell’s execution policy is set to Restricted; that means that scripts – including those you write yourself – won’t run. Period.

Note. You can verify the settings for your execution policy by typing the following at the PowerShell command prompt and then pressing ENTER:

Get-ExecutionPolicy

Now, admittedly, this might seem a bit severe. After all, what’s the point of having a scripting environment if you can’t even run scripts with it? But that’s OK. If you don’t like the default execution policy (and you probably won’t) then just go ahead and change it. For example, suppose you want to configure PowerShell to run – without question – any scripts that you write yourself, but to run scripts downloaded from the Internet only if those scripts have been signed by a trusted publisher. In that case, use this command to set your execution policy to RemoteSigned:

Set-ExecutionPolicy RemoteSigned

Alternatively, you can set the execution policy to AllSigned (all scripts, including those you write yourself, must be signed by a trusted publisher) or Unrestricted (all scripts will run, regardless of where they come from and whether or not they’ve been signed).

See? No need to panic at all, is there?

Note. Not sure what we mean by “signing scripts?” Then open up PowerShell, type the following, and press ENTER:

Get-Help About_Signing

Or, even better, download our Windows PowerShell Graphical Help File and read the same topic in standard Windows help format.

After you change your execution policy settings it's possible to run scripts. However, you still might run into problems. For example, suppose you change directories from your Windows PowerShell home directory to C:\Scripts (something you can do by typing cd C:\Scripts). As it turns out, the C:\Scripts folder contains a script named Test.ps1. With that in mind you type the following and then press ENTER:

Test.ps1

And here’s the response you get:

The term 'test.ps1' is not recognized as a cmdlet, function, operable program, or script file. Verify the term and try again.
At line:1 char:7
+ test.ps1 <<<<

We know what you’re thinking: didn’t we just change the execution policy? Yes, we did. However, this has nothing to do with the execution policy. Instead, it has to do with the way that PowerShell handles file paths. In general, you need to type the complete file path in order to run a script. That’s true regardless of your location within the file system. It doesn’t matter if you’re in C:\Scripts; you still need to type the following:

C:\Scripts\Test.ps1

Now, we said “in general” because there are a couple of exceptions to this rule. For example, if the script happens to live in the current directory you can start it up using the .\ notation, like so:

.\Test.ps1
Note. There’s no space between the .\ and the script name.

And while PowerShell won’t search the current directory for scripts it will search all of the folders found in your Windows PATH environment variable. What does that mean? That means that if the folder C:\Scripts is in your path then you can run the script using this command:

Test.ps1

But be careful here. Suppose C:\Scripts is not in your Windows path. However, suppose the folder D:\Archive is in the path, and that folder also contains a script named Test.ps1. If you’re in the C:\Scripts directory and you simply type Test.ps1 and press ENTER, guess which script will run? You got it: PowerShell won’t run the script in C:\Scripts, but it will run the script found in D:\Archive. That’s because D:\Archive is in your path.

Just something to keep in mind.
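
If you're ever unsure which copy will win, you can ask PowerShell to resolve the name for you. This is a hedged sketch; it assumes some Test.ps1 is reachable through your PATH:

```powershell
# Ask PowerShell which Test.ps1 it would actually run when you type the bare name.
# Get-Command resolves external scripts through the PATH and reports the full path.
Get-Command Test.ps1 | Select-Object -ExpandProperty Path
```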

Note. Just for the heck of it, here’s a command that retrieves your Windows PATH environment variable and displays it in a readable fashion:

$a = $env:path; $a.Split(";")

Even More About File Paths


Now we know that all we have to do is type in the full path to the script file and we’ll never have to worry about getting our scripts to run, right? Right.

Well, almost right. There’s still the matter of scripts whose path name includes a blank space. For example, suppose you have a script stored in the folder C:\My Scripts. Try typing this command and see what happens:

C:\My Scripts\Test.ps1

Of course, by now you’ve come to expect the unexpected, haven’t you? Here’s what you get back:

The term 'C:\My' is not recognized as a cmdlet, function, operable program, or script file. Verify the term and try again.
At line:1 char:8
+ C:\My  <<<< Scripts\Test.ps1

This one you were able to figure out on your own, weren't you? Yes, just like good old Cmd.exe, PowerShell has problems parsing file paths that include blank spaces. (In part because blank spaces are how you separate command-line arguments from the call to a script.) In Cmd.exe you can work around this problem by enclosing the path in double quotes. Logically enough, you try the same thing in PowerShell:

"C:\My Scripts\Test.ps1"

And here’s what you get back:

"C:\My Scripts\Test.ps1"

Um, OK …. You try it again. And here’s what you get back:

"C:\My Scripts\Test.ps1"

You try it – well, look, there’s no point in trying it again: no matter how many times you try this command, PowerShell will simply display the exact same string value you typed in. If you actually want to execute that string value (that is, if you want to run the script whose path is enclosed in double quotes) you need to preface the path with the Call operator (the ampersand). You know, like this:

& "C:\My Scripts\Test.ps1"
Note. With this particular command you can either leave a space between the ampersand and the path name or not leave a space between the ampersand and the path name; it doesn’t matter.

To summarize, here’s how you run from scripts from within Windows PowerShell:

  • Make sure you’ve changed your execution policy. By default, PowerShell won’t run scripts at all, no matter how you specify the path.
  • To run a script, specify the entire file path; alternatively, 1) use the .\ notation to run a script in the current directory, or 2) put the folder where the script resides in your Windows path.
  • If your file path includes blank spaces, enclose the path in double quote marks and preface the path with an ampersand.

And, yes, that all takes some getting used to. However, you will get used to it. (To make life easier for you, we recommend that you keep all your scripts in one folder, such as C:\Scripts, and add that folder to your Windows path.)

Note. So can you use PowerShell to add a folder to your Windows Path? Sure; here’s a command (that we won’t bother to explain in this introductory article) that tacks the folder C:\Scripts onto the end of your Windows path:

$env:path = $env:path + ";c:\scripts"

Bonus: “Dot Sourcing” a Script


Admittedly, up to this point the news hasn’t been all that good: you can’t run a PowerShell script by double-clicking the script icon; PowerShell doesn’t automatically look for scripts in the current working directory; spaces in path names can cause all sorts of problems; etc. etc. Because of that, let’s take a moment to talk about one very cool feature of Windows PowerShell scripting: dot sourcing.

Suppose we have a very simple VBScript script like this one:

A = 5
B = 10
C = A + B

If you run this script from the command window, the script will run just fine. However, because we forgot to include an Echo statement we won’t see anything happen onscreen. Because of that we’ll never know the value of C. Sure, we could try typing Wscript.Echo C at the command prompt, but all we’ll get back is the following error message:

'Wscript.echo' is not recognized as an internal or external command,
operable program or batch file.

That should come as no surprise: scripts are scripts, the command window is the command window, and ne’er the twain shall meet. Sure, it would be nice if the command window had access to values that were assigned in a script (and vice-versa), but it ain’t gonna happen.

At least not in VBScript.

Now, let’s consider a Windows PowerShell counterpart to our VBScript script:

$A = 5
$B = 10
$C = $A + $B

Suppose we run this script, then type $C at the command prompt. What do you think we'll get back? If you guessed nothing, then you guessed correctly: we don't get back anything at all. Which, again, should come as no great surprise. Come on, Scripting Guys; shouldn't this be leading us somewhere?

Yes, it should. And believe it or not, it is. Let’s run our PowerShell script again, only this time let’s “dot source” it; that is, let’s type a period and a blank space and then type the path to the script file. For example:

. c:\scripts\test.ps1

When we run the script nothing will seem to happen; that's because we didn't include any code for displaying the value of $C. But now try typing $C at the command prompt. Here's what you'll get back:

15

Good heavens! Was this a lucky guess on the part of the PowerShell console, or is this some sort of magic?

Surprisingly enough, it’s neither. Instead, this is dot sourcing. When you dot source a script (that is, when you start the script by prefacing the path to the script file with a dot and a blank space) any variables used in the script become global variables that are available in multiple scopes. What does that mean? Well, a script happens to represent one scope; the console window happens to represent another scope. We started the script Test.ps1 by dot sourcing it; that means that the variable $C remains “alive” after the script ends. In turn, that means that this variable can be accessed via the command window. In addition, these variables can be accessed from other scripts. (Or at least from other scripts started from this same instance of Windows PowerShell.)

Suppose we have a second script (Test2.ps1) that does nothing more than display the value of the variable $C:

$C

Look what happens when we run Test2.ps1 (even if we don’t use dot sourcing when starting the script):

15

Cool. Because $C is a global variable everyone has access to it.

And, trust us here: this is pretty cool. For example, suppose you have a database that you periodically like to muck around with. If you wanted to, you could write an elaborate script that includes each and every analysis you might ever want to run on that data. Alternatively, you could write a very simple little script that merely connects to the database and returns the data (stored in a variable). If you dot source that script on startup you can then sit at the command prompt and muck around with the data all you want. That’s because you have full access to the script variables and their values.
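
For instance, here's a hypothetical sketch of that workflow; GetData.ps1 and the CSV column names are our own inventions, not anything from the Scripting Guys:

```powershell
# Contents of a hypothetical C:\Scripts\GetData.ps1:
#   $data = Import-Csv C:\Scripts\inventory.csv
#
# Dot source it once so $data survives after the script ends...
. C:\Scripts\GetData.ps1

# ...and then muck around with the data interactively at the prompt:
$data | Where-Object { $_.Status -eq "Offline" }
$data | Group-Object Department
```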

Note. OK, sure, this could cause you a few problems as well, especially if you tend to use the same variable names in all your scripts. But that’s OK; if you ever need to wipe out the variable $C just run the following command (note that, with the Remove-Variable cmdlet, we need to leave off the $ when indicating the variable to be removed):

Remove-Variable C
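If you're not sure which scope a variable lives in, both Get-Variable and Remove-Variable accept a -Scope parameter (a small sketch):

```powershell
# Show the variable C as it exists in the global scope
Get-Variable -Name C -Scope Global

# Remove it from the global scope specifically
Remove-Variable -Name C -Scope Global
```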

Play around with this a little bit and you’ll start to see how useful dot sourcing can be.

Running Scripts Without Starting Windows PowerShell


We realize that it’s been awhile, but way back at the start of this article we tried running a Windows PowerShell script by double-clicking a .PS1 file. That didn’t go quite the way we had hoped: instead of running the script all we managed to do was open the script file in Notepad. Interestingly enough, that’s the way it’s supposed to work: as a security measure you can’t start a PowerShell script by double-clicking a .PS1 file. So apparently that means that you do have to start PowerShell before you can run a PowerShell script.

In a somewhat roundabout way, that’s technically true. However, that doesn’t mean that you can’t start a PowerShell script from a shortcut or from the Run dialog box; likewise you can run a PowerShell script as a scheduled task. The secret? Instead of calling the script you need to call the PowerShell executable file, and then pass the script path as an argument to PowerShell.exe. For example, in the Run dialog box you might type a command like powershell.exe -noexit c:\scripts\test.ps1:

Running Scripts from the Run Dialog

There are actually three parts to this command:

  • Powershell.exe, the Windows PowerShell executable.
  • -noexit, an optional parameter that tells the PowerShell console to remain open after the script finishes. Like we said, this is optional: if we leave it out the script will still run. However, the console window will close the moment the script finishes, meaning we won’t have the chance to view any data that gets displayed to the screen.

    Incidentally, the -noexit parameter must immediately follow the call to the PowerShell executable. Otherwise the parameter will be ignored and the window will close anyway.

  • C:\Scripts\Test.ps1, the path to the script file.

What if the path to the script file contains blank spaces? In that case you need to do the ampersand trick we showed you earlier; in addition, you need to enclose the script path in single quote marks, like so:

powershell.exe -noexit &'c:\my scripts\test.ps1'

Strange, but true!

Note. Here’s an interesting variation on this same theme: instead of starting PowerShell and asking it to run a particular script you can start PowerShell and ask it to run a particular command. For example, typing the following in the Run dialog box not only starts PowerShell but also causes it to run the Get-ChildItem cmdlet against the folder C:\Scripts:

powershell.exe -noexit get-childitem c:\scripts

It’s possible to get even more elaborate when starting Windows PowerShell, but this will do for now. If you’d like more information on PowerShell startup options just type powershell.exe /? from either the Windows PowerShell or the Cmd.exe command prompt.
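The same pattern covers the scheduled-task case mentioned earlier. As a sketch (the task name and schedule here are made up for illustration):

```shell
:: Run Test.ps1 every day at 6:00 AM via the Task Scheduler
schtasks /create /tn "RunTestScript" /sc daily /st 06:00 /tr "powershell.exe c:\scripts\test.ps1"
```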

By the way, this is the same approach you need to use if you want to run a Windows PowerShell script as part of a logon script. You can’t simply assign a .PS1 file as a logon script; the operating system won’t know what to do with that. Instead, you’ll need to create a VBScript script that calls the PowerShell script:

Set objShell = CreateObject("Wscript.Shell")
objShell.Run("powershell.exe -noexit c:\scripts\test.ps1")

Assign this VBScript script as the logon script and everything should work just fine. (Assuming, of course, that you’ve installed Windows PowerShell on any computers where this logon script is going to run.)

See? That Wasn’t So Bad


Admittedly, running Windows PowerShell scripts might not be as straightforward and clear-cut as it could be. On the other hand, it won’t take you long to catch on, and you’ll soon be running PowerShell scripts with the best of them. Most important, you’ll also be able to say things like, “You know, you really ought to dot source that script when you run it.” If that doesn’t impress your colleagues then nothing will.

Windows PowerShell Owner’s Manual

Five Command Line Tools for Managing Group Policy



Here are five command line tools you should keep handy when managing Group Policy throughout your organization.

GPMC: If you know anything about Group Policy, you probably know that GPMC is used to manage Active Directory-based Group Policy. GPMC provides a comprehensive set of Component Object Model (COM) interfaces that you can use to script many operations.

GPFIXUP: This is used to resolve domain name dependencies in Group Policy objects and Group Policy links after a domain rename operation.

GPRESULT: You can use this tool to see what policy is in effect and to troubleshoot policy problems.

GPUPDATE: This lets you refresh Group Policy manually. Gpupdate replaces the SECEDIT /refreshpolicy tool that was available in Windows 2000. If you type gpupdate at a command prompt, both the Computer Configuration settings and the User Configuration settings in Group Policy will be refreshed on the local computer.
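Gpupdate also takes switches to narrow or force the refresh; for example:

```shell
:: Refresh only the Computer Configuration settings
gpupdate /target:computer

:: Reapply all policy settings, not just the ones that have changed
gpupdate /force
```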

LDIFDE: This tool is used to import and export directory information. You can use LDIFDE to help you perform advanced backup and recovery of policy settings that are stored outside of GPOs. Specifically, you can use this tool to back up and restore a large number of Windows Management Instrumentation (WMI) filters at one time.
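As a hedged sketch of that WMI filter scenario (the domain DN below is a placeholder): WMI filters are stored under the System container, so an export and later re-import might look like this:

```shell
:: Export all WMI filters to an LDIF file
ldifde -f wmifilters.ldf -d "CN=SOM,CN=WMIPolicy,CN=System,DC=fabrikam,DC=com" -p subtree

:: Import them back later
ldifde -i -f wmifilters.ldf
```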

Tip adapted from Windows Group Policy Administrator’s Pocket Consultant by William Stanek.

via Five Command Line Tools for Managing Group Policy.

Query in CMD for FSMO Roles

Question


Hello,

Is there some command in DOS or PowerShell to quickly determine where the FSMO roles are in AD?

Thank you.

Monday, September 10, 2012 5:33 AM


ChristianGomez1980

Answers


Hi Christian!

Try this:

netdom query fsmo

Regards!

Pablo Ariel Di Loreto

IT Consultant

This posting is provided “AS IS” with no warranties and confers no rights! Always test ANY suggestion in a test environment before implementing!

Marked as answer by ChristianGomez1980 Monday, September 10, 2012 5:41 AM

Monday, September 10, 2012 5:37 AM



Christian!

Netdom is a command-line tool that you can use from CMD (and will work from PowerShell too).

Please see: http://technet.microsoft.com/en-us/library/cc835089(v=ws.10).aspx

Regards!

Pablo Ariel Di Loreto

IT Consultant

This posting is provided “AS IS” with no warranties and confers no rights! Always test ANY suggestion in a test environment before implementing!

Marked as answer by ChristianGomez1980 Monday, September 10, 2012 5:52 AM

Monday, September 10, 2012 5:48 AM



You can simply open cmd and execute the command netdom query fsmo; this will list the FSMO role holder servers. You can also check the same from the GUI or with ntdsutil. See the link below for how to do this.

http://www.petri.co.il/determining_fsmo_role_holders.htm

FSMO Roles and PowerShell

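With the Active Directory module loaded (see the quick start at the top of this guide), the Get-ADDomain and Get-ADForest cmdlets expose the role holders directly:

```powershell
# Domain-level FSMO roles
Get-ADDomain | Select-Object PDCEmulator, RIDMaster, InfrastructureMaster

# Forest-level FSMO roles
Get-ADForest | Select-Object SchemaMaster, DomainNamingMaster
```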

Best Regards,

Sandesh Dubey.

MCSE|MCSA:Messaging|MCTS|MCITP:Enterprise Administrator | My Blog

Disclaimer: This posting is provided “AS IS” with no warranties or guarantees , and confers no rights.

via Query in CMD for FSMO Roles.

How to extend AD schema without replicating to other servers – itbl0b

Hi guys,

In this post I’m going to talk about a safer way to extend Active Directory Schema – if you have to.

Let me start by stating this: I've been in the business for quite some time, and I've extended the schema many times (for Exchange upgrades, for domain upgrades, for Lync, and so on). Each and every one of those upgrades was successful, without any problems.

But, from time to time I get the question from clients – what if anything goes wrong? How can I be sure that the process is safe?

As you all probably know – extending the schema is irreversible! You can’t just undo this.

Many of you probably think that if anything goes wrong you can simply do an Authoritative Restore of Active Directory, and that will solve the problem. Wrong! An Authoritative Restore does not cleanly roll the schema back to an older version: it restores it with orphaned objects, which means that other DCs in the domain will simply ignore that schema version. The proper way to restore an Active Directory schema is to remove all Domain Controllers from the network, install one from scratch, restore the System State on that server, run an Authoritative Restore on that new DC, and then install new DCs.

You can read more on the subject on that Technet page:

http://technet.microsoft.com/en-us/library/cc961934.aspx

to quote:

“Only the domain and configuration domain directory partitions can be marked as authoritative. The schema cannot be authoritatively restored because it might endanger data integrity. For example, if the schema was modified, and then objects of the new or modified classSchema object were created, subsequent authoritative restore might replace the new or modified classes causing serious data consistency problems.”

So the question is – how can I make sure that I have a way back if anything goes wrong?

Well, in case a client of mine wants to be 100% sure that he can revert the process, here's what I do:

1. If I have more than one Domain Controller, I take the DC that holds the Schema Master FSMO role and disable outbound replication on it. To do that, simply run the following command:

Repadmin /Options <SchemaMasterName> +Disable_Outbound_Repl

2. If I have more than two Domain Controllers, then in addition to disabling the outbound replication, I also shut down one of the DCs.

Why do I do it?

1. If the schema extension process goes wrong, then because I've disabled outbound replication on that DC, the other DCs won't get the schema update. I will then remove that Domain Controller from the network, seize the Schema Master role on one of the other DCs, and that's it.

2. In the situation where I have more than two DCs, in addition to being able to seize the role, I also have the ability to completely remove all the Domain Controllers except the one that was shut down. Since it was turned off, it didn't get any replication; I can simply turn all the other DCs off (destroying them), turn on the remaining DC, and work from there.

If everything goes well and the schema extension completes successfully (and it will!), I can simply remove the flag that disables the outbound replication by running the following command:

Repadmin /Options <SchemaMasterName> -Disable_Outbound_Repl

Then I make sure that all Domain Controllers have replicated by running the following command:

Repadmin /SyncAll /e /A

If I shut down a DC before the process, I turn it on only after I've made sure that replication to the other DCs went well. In fact, you can leave it off for a couple of days, just to be sure!
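To sanity-check replication health before and after the change, repadmin has summary switches as well; for example:

```shell
:: One-line-per-DC summary of replication failures and largest deltas
repadmin /replsummary

:: Inbound replication status for a specific DC (name is a placeholder)
repadmin /showrepl <DCName>
```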

via How to extend AD schema without replicating to other servers – itbl0b.

Best Practices for Implementing Schema Updates or : How I Learned to Stop Worrying and Love the Forest Recovery

28 May 2012 6:02 PM

Note:  This is general best practice guidance for implementing schema extensions, not the testing of their functionality.  There may be some additional best practices around design and functionality of schema extensions that should be considered.  Understand that the implementation of a schema extension may well succeed, but the functionality around the extension may not behave as expected.

As with any change to the Active Directory infrastructure, the two primary concerns around implementing a schema extension are:

1. Have you tested it, so you can be reasonably sure it will behave as expected when implemented in production?

2. Do you have a roll-back plan?  And is it tested?

Digging into the details of each of these is where things get a little stickier.  However, having personally helped customers with dozens of schema updates, I can honestly say that staying within best practices isn’t that hard, and definitely makes implementation less risky and less stressful.

Have you tested your schema update, so you can be reasonably sure it will behave as expected when implemented in production?

The reason this question gets so sticky is that customers either don’t have a test environment, or they don’t have a test environment that reasonably reflects the production environment.  With respect to testing a schema extension, the best test environment is one that has an identical schema to the production environment.  How can you build and/or maintain a test environment that has a schema that is identical to production?

1. Maintain a test Active Directory environment.  On an ongoing basis, be sure to apply all schema extensions to your test environment that you do to your production environment.

2. Build a test Active Directory environment, then synchronize the schema to production.  Specifically:

a. Start by building the test environment to the same AD version as production.  That is, if all your production DCs are Windows Server 2003 or lower, make sure your test environment has a 2003 schema.  If the production schema has been extended to 2008 R2, apply the 2008 R2 schema extensions to your test environment.

b. Apply any other known production schema extensions to the test environment.  This includes things like Exchange, OCS, Lync or SCCM.

c. Fellow PFE Ashley McGlone has a cool PowerShell script that will analyze your production schema for other extensions, to help you “remember” any other schema extensions.

d. AD LDS (formerly known as ADAM) has an awesome schema analyzer tool that will compare two schemas and prepare an LDIF file so you can actually synchronize them.  You should definitely use this tool to sync the schemas across your production and test environments.

3. Perform a Forest Recovery Test on your production forest.  (Please be sure you isolate your recovery environment when you test forest recovery).  Your recovered forest will most certainly have an identical schema to production.  Perform your schema update test on this recovered environment.

Typically people will shy away from #3 because it seems the hardest (and potentially most dangerous if you forget to fully isolate the recovered forest).  However, based on my experiences, I think #3 is the best option.  Why?  Because it forces you to do something you should be doing anyway (see the section below), and there is no doubt that the schema in your test/recovered environment will be the same as the schema in production.

Do you have a roll-back plan?  And is it tested?

There’s no delicate way of saying this, so I’m just going to say it:

The only supported/guaranteed way to roll back a schema change is a full forest recovery.

Thus, the best (only?) roll-back plan is a well-designed, documented and tested forest recovery plan.  I know it sounds harsh (and it is), but you must be prepared for forest recovery.  A couple points to make this otherwise bitter pill a bit easier to swallow:

1. You should have a documented and tested forest recovery plan anyways.  It’s a general best practice.  You’ve probably been ignoring it for a while, so if you’re serious about a roll-back plan for your schema update, now is the time to get serious about documenting and testing forest recovery plan.

2. It’s not as hard as it appears.  But it is very unforgiving in the details.  We’ve got a great whitepaper to help you through the details.

3. You can actually kill two birds with one stone here.  The forest recovery test will actually generate a great test environment for testing your schema extension (see option #3, above, for testing schema updates).

If you’ve avoided testing forest recovery this long, I expect you won’t go down without a fight.  Here are some of the “alternatives” I’ve heard people used for potential roll-back strategies:

1. Disable inbound/outbound replication on the schema master.  Then perform the schema update on the schema master.  Any badness is contained to the schema master.  If something goes bad, blow up the schema master and repair the rest of the forest (seize schema master on another DC and clean out the old schema master).

2. Shut down/stop replication on select DCs.  Do the schema upgrade, and if something goes bad, kill all the DCs that were online and may have potentially replicated the “badness”.  Light up the DCs that were offline and repair/restore your forest.

Typically, I don’t like to go down those rabbit-holes.  First, choosing one of those strategies still does not absolve you from needing a documented and tested forest recovery plan.  Second, either of those strategies requires a good bit of work to prepare and execute, and failure to execute properly could be disastrous.  Third, if I’m upgrading the schema I like to make sure AD replication is healthy before, during and after the update.  Taking DCs offline, or isolating them, significantly impairs the ability to check health; you need to be on your toes to distinguish real errors from self-inflicted errors (caused by the isolation).  Finally, be aware that for some schema upgrades (ADPREP specifically), Microsoft recommends against disabling replication on the schema master.  Also, check out another strong recommendation against isolation.

Thus, I would recommend investing your valuable resources in a forest recovery test, and a schema extension test (on the recovered forest).  After that, there’s not a lot of value in additional risk-mitigation strategies like schema master isolation.  If you’ve tested the schema extension and validated recovery you’ve done your due diligence, so know the odds are monumentally in your favor.  Schema extensions, especially Microsoft-packaged schema extensions, have a proven and well-tested track record.  And real-life examples of customers needing to perform a production forest-recovery are almost non-existent.

Put it all together and it’s really quite simple

Get yourself in the habit of preparing for all schema extensions with a one-two step.  First, test your forest recovery plans.  Second, test your schema extensions in your recovery environment and in any other test/non-production environments you may have.  The first time you perform the exercise, be sure to document it.  Every subsequent time, be sure to review and update your documentation.  You can then be confident that you’ve done everything possible to ensure the schema extension goes off without a hitch.

Synchronize replication with all partners: Active Directory

Synchronize replication with all partners: Active Directory.

 

Synchronize replication with all partners


Updated: June 8, 2005

Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2

You can use this procedure to synchronize replication with all replication partners of a domain controller.

Administrative credentials

To perform this procedure, you must be a member of the Domain Admins group in the domain of the selected domain controller or the Enterprise Admins group in the forest, or you must have been delegated the appropriate authority. If you want to synchronize the configuration and schema directory partitions on a domain controller in a child domain, you must have Domain Admins credentials in the forest root domain or Enterprise Admins credentials in the forest.

To synchronize replication with all partners

  1. At a command prompt, type the following command, and then press ENTER:

    repadmin /syncall DCName /e /d /A /P /q

    DCName: The Domain Name System (DNS) name of the domain controller on which you want to synchronize replication with all partners.

    /e: Enterprise; includes partners in all sites.

    /d: Identifies servers by distinguished name in messages.

    /A: All; synchronizes all directory partitions that are held on the home server.

    /P: Pushes changes outward from the home server.

    /q: Runs in quiet mode; suppresses callback messages.

  2. Check for replication errors in the output of the command in the previous step. If there are no errors, replication is successful. For replication to complete, any errors must be corrected.


Delete Failed DCs from Active Directory.

 

Delete Failed DCs from Active Directory

by Daniel Petri – January 8, 2009

How can I delete a failed Domain Controller object from Active Directory?

When you try to remove a domain controller from your Active Directory domain by using Dcpromo.exe and fail, or when you begin to promote a member server to a Domain Controller and fail (the reasons for the failure are not important for the scope of this article), you will be left with remains of the DC's objects in Active Directory. As part of a successful demotion process, the Dcpromo wizard removes the configuration data for the domain controller from Active Directory, but as noted above, a failed Dcpromo attempt might leave these objects in place.

The effects of leaving such remains inside Active Directory may vary, but one thing is sure: whenever you try to reinstall the server with the same computer name and promote it to a Domain Controller, you will fail, because the Dcpromo process will still find the old object and will therefore refuse to re-create the objects for the new-old server.

In the event that the NTDS Settings object is not removed correctly you can use the Ntdsutil.exe utility to manually remove the NTDS Settings object.

If you give the new domain controller the same name as the failed computer, then you need to perform only the first procedure to clean up metadata, which removes the NTDS Settings object of the failed domain controller. If you give the new domain controller a different name, then you need to perform all three procedures: clean up metadata, remove the failed server object from the site, and remove the computer object from the domain controllers container.

You will need the following tools: Ntdsutil.exe, Active Directory Sites and Services, and Active Directory Users and Computers.

Also, make sure that you use an account that is a member of the Enterprise Admins universal group.

Caution: Using the Ntdsutil utility incorrectly may result in partial or complete loss of Active Directory functionality.

To clean up metadata

  1. At the command line, type Ntdsutil and press ENTER.
C:\WINDOWS>ntdsutil
ntdsutil:
  2. At the Ntdsutil: prompt, type metadata cleanup and press Enter.
ntdsutil: metadata cleanup
metadata cleanup:
  3. At the metadata cleanup: prompt, type connections and press Enter.
metadata cleanup: connections
server connections:
  4. At the server connections: prompt, type connect to server <servername>, where <servername> is the domain controller (any functional domain controller in the same domain) from which you plan to clean up the metadata of the failed domain controller. Press Enter.
server connections: connect to server server100
Binding to server100 ...
Connected to server100 using credentials of locally logged on user.
server connections:

Note: Windows Server 2003 Service Pack 1 eliminates the need for the above step.

  5. Type quit and press Enter to return to the metadata cleanup: prompt.
server connections: q
metadata cleanup:
  6. Type select operation target and press Enter.
metadata cleanup: Select operation target
select operation target:
  7. Type list domains and press Enter. This lists all domains in the forest with a number associated with each.
select operation target: list domains
Found 1 domain(s)
0 - DC=dpetri,DC=net
select operation target:
  8. Type select domain <number>, where <number> is the number corresponding to the domain in which the failed server was located. Press Enter.
select operation target: Select domain 0
No current site
Domain - DC=dpetri,DC=net
No current server
No current Naming Context
select operation target:
  9. Type list sites and press Enter.
select operation target: List sites
Found 1 site(s)
0 - CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=dpetri,DC=net
select operation target:
  10. Type select site <number>, where <number> refers to the number of the site in which the domain controller was a member. Press Enter.
select operation target: Select site 0
Site - CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=dpetri,DC=net
Domain - DC=dpetri,DC=net
No current server
No current Naming Context
select operation target:
  11. Type list servers in site and press Enter. This will list all servers in that site with a corresponding number.
select operation target: List servers in site
Found 2 server(s)
0 - CN=SERVER200,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=dpetri,DC=net
1 - CN=SERVER100,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=dpetri,DC=net
select operation target:
  12. Type select server <number> and press Enter, where <number> refers to the domain controller to be removed.
select operation target: Select server 0
Site - CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=dpetri,DC=net
Domain - DC=dpetri,DC=net
Server - CN=SERVER200,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=dpetri,DC=net
 DSA object - CN=NTDS Settings,CN=SERVER200,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=dpetri,DC=net
 DNS host name - server200.dpetri.net
 Computer object - CN=SERVER200,OU=Domain Controllers,DC=dpetri,DC=net
No current Naming Context
select operation target:
  13. Type quit and press Enter. The Metadata cleanup menu is displayed.
select operation target: q
metadata cleanup:
  14. Type remove selected server and press Enter.

You will receive a warning message. Read it, and if you agree, press Yes.

metadata cleanup: Remove selected server
"CN=SERVER200,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=dpetri,DC=net" removed from server "server100"
metadata cleanup:

At this point, Active Directory confirms that the domain controller was removed successfully. If you receive an error saying that the object could not be found, Active Directory might have already removed it.

  15. Type quit, and press Enter until you return to the command prompt.

To remove the failed server object from the sites

  1. In Active Directory Sites and Services, expand the appropriate site.
  2. Delete the server object associated with the failed domain controller.

To remove the failed server object from the domain controllers container

  1. In Active Directory Users and Computers, expand the domain controllers container.
  2. Delete the computer object associated with the failed domain controller.

  3. Windows Server 2003 AD might display a new type of question window, asking you if you want to delete the server object without performing a DCPROMO operation (which, of course, you cannot perform, otherwise you wouldn’t be reading this article, would you…). Select “This DC is permanently offline…” and click on the Delete button.

  4. AD will display another confirmation window. If you’re sure that you want to delete the failed object, click Yes.

To remove the failed server object from DNS

  1. In the DNS snap-in, expand the zone that is related to the domain from where the server has been removed.
  2. Remove the CNAME record in the _msdcs.<forest root domain> zone in DNS. You should also delete the host (A) record and any other DNS records for the server.

  3. If you have reverse lookup zones, also remove the server from these zones.

Other considerations

Also, consider the following:

  • If the removed domain controller was a global catalog server, evaluate whether application servers that pointed to the offline global catalog server must be pointed to a live global catalog server.
  • If the removed DC was a global catalog server, evaluate whether an additional global catalog must be promoted to address the site, domain, or forest global catalog load.
  • If the removed DC was a Flexible Single Master Operation (FSMO) role holder, relocate those roles to a live DC.
  • If the removed DC was a DNS server, update the DNS client configuration on all member workstations, member servers, and other DCs that might have used this DNS server for name resolution. If it is required, modify the DHCP scope to reflect the removal of the DNS server.
  • If the removed DC was a DNS server, update the Forwarder settings and the Delegation settings on any other DNS servers that might have pointed to the removed DC for name resolution.