Tuesday, 8 September 2015

Debugging Group Policy

Not entirely sure what is happening with the preferences you are setting in Group Policy?  You can enable Group Policy "Logging and Tracing" which should give you a better idea.

The settings can be found in Group Policy Editor, under:
Computer Configuration\Policies\Administrative Templates\System\Group Policy

Enable whichever policy settings you require

Reboot the machine and log on

Logs can be found in the following locations:
User trace %COMMONAPPDATA%\GroupPolicy\Preference\Trace\User.log
Computer trace %COMMONAPPDATA%\GroupPolicy\Preference\Trace\Computer.log
Planning trace %COMMONAPPDATA%\GroupPolicy\Preference\Trace\Planning.log

Domain Controller Replication

When setting up a domain, you really should check that all domain controllers are replicating successfully.  It is also useful to check this when troubleshooting domain-related problems, just in case a DC is out of sync.

At the command prompt, on the DC you are checking, type:
    repadmin /showrepl

That's it!
Obviously there is a bit more to this command.  For instance, to check the replication status of a different domain controller, you would use:
    repadmin /showrepl <servername> /u:<domain name>\<username> /pw:*

Sunday, 24 May 2015

Mounting Windows Shares in Ubuntu

In a mixed Windows/Ubuntu environment, it is often the case that you need to mount a Windows share from within Ubuntu.  This can be done on an ad-hoc basis, or at every log on.

Whilst it is possible to mount a share with a single /etc/fstab line (//servername/sharename  /media/windowsshare  cifs  username=msusername,password=mspassword,iocharset=utf8,sec=ntlm  0  0), this is not recommended, since the username and password are visible to anyone who can read the file.  This may not be a problem for you, but this post takes the extra steps of masking those details.

  1. Install the CIFS Utilities
        sudo apt-get install cifs-utils
  2. Create a directory where the share will be mounted.  I personally like to create this mount in my home directory, but you can create it pretty much wherever you like:
        sudo mkdir /home/<UbuntuUserName>/Server
  3. Create a smbcredentials file (no sudo needed - the file lives in your own home directory, and creating it as root would stop the chmod in step 5 from working):
        gedit ~/.smbcredentials
  4. Add lines for the username and password (for the destination where the share is located):
        username=<Username>
        password=<Password>

  5. Restrict the permissions on the smbcredentials file so that only your user can read it:
        chmod 600 ~/.smbcredentials
  6. Edit the /etc/fstab file with root privileges and add the following line:
        //<Servername>/<Sharename> /home/<UbuntuUserName>/Server cifs credentials=/home/<UbuntuUserName>/.smbcredentials,iocharset=utf8,sec=ntlm 0 0
    Don't forget to save the file!
  7. This can then be tested by typing the following command:
        sudo mount -a
    If this correctly mounts the share, it should work the next time you log on.
  8. Note: In the latest versions of most file managers, these mounted shares are read-only by default.  If, like me, you are used to a Windows environment, this can be a little annoying.
    However, it is much more secure.  To write to the mounted share, simply open the file manager as root (e.g. sudo pcmanfm)

Note: the following "variables" are used in the above steps:
<Username>
    Username for accessing the remote share
<Password>
    Password for accessing remote share
<UbuntuUserName>
    Local Ubuntu username
<Servername>
    Server name or IP address of the server housing the remote share
<Sharename>
    Share name on the remote server
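To double-check the pieces before touching /etc/fstab, the entry from step 6 can be assembled with a small shell sketch.  The server, share and user names below are made-up example values, not anything from a real network:

```shell
#!/bin/sh
# Sketch: assemble the fstab entry from step 6 using placeholder values.
# SERVER, SHARE and UBUNTU_USER are made-up examples - substitute your own.
SERVER="fileserver01"
SHARE="public"
UBUNTU_USER="alice"

MOUNTPOINT="/home/${UBUNTU_USER}/Server"
CREDFILE="/home/${UBUNTU_USER}/.smbcredentials"

# Same options as the post: credentials file, UTF-8, NTLM security
FSTAB_LINE="//${SERVER}/${SHARE} ${MOUNTPOINT} cifs credentials=${CREDFILE},iocharset=utf8,sec=ntlm 0 0"

# Write to a scratch file rather than the real /etc/fstab,
# so the line can be reviewed before committing it
echo "$FSTAB_LINE" > /tmp/fstab.addition
cat /tmp/fstab.addition
```

Once the scratch line looks right, it can be appended to /etc/fstab with root privileges and tested with sudo mount -a as in step 7.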

Installing Wireless in Ubuntu

When installing Ubuntu (and its many variants) on older hardware, it is often the case that the wireless drivers do not work without some tinkering.  There are too many different cards out there to write a set of instructions for every one, therefore I will focus on the steps required on my old Dell laptops, all of which use Broadcom network cards.

The steps below install the Broadcom drivers on Ubuntu:

  1. Identify the installed hardware, by typing:
        lspci -vnn | grep Network
  2. On my Dell D620, this returned the following:
        Broadcom Corporation BCM4311 802.11b/g WLAN [14e4:4311] (rev 01)
    It is the BCM4311 part we are interested in.
  3. Remove the currently installed Broadcom drivers:
        sudo apt-get remove --purge bcmwl-kernel-source
  4. Update the software list:
        sudo apt-get update
  5. Install the correct firmware.  Since the output in step #2 reported "BCM4311", I require the "b43" version of the firmware:
        sudo apt-get install firmware-b43-installer
  6. Reboot
Upon starting the machine, a list of available wireless networks should be available.
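The chipset-to-package decision in steps 2 and 5 can be sketched as a script.  The sample lspci line is the D620 output quoted above; the mapping table is illustrative only and covers just a couple of common Broadcom chipsets - consult the Linux b43 documentation for your exact card:

```shell
#!/bin/sh
# Sketch: extract the Broadcom chipset from lspci output and choose a
# firmware package. The sample line is the D620 output quoted in the post.
LSPCI_LINE="Broadcom Corporation BCM4311 802.11b/g WLAN [14e4:4311] (rev 01)"

# Pull out the BCMxxxx identifier
CHIP=$(echo "$LSPCI_LINE" | grep -o 'BCM[0-9]*')

# Illustrative mapping only - other chipsets need checking against the
# b43 documentation before trusting this
case "$CHIP" in
    BCM4311|BCM4318) PKG="firmware-b43-installer" ;;
    BCM4301)         PKG="firmware-b43legacy-installer" ;;
    *)               PKG="unknown" ;;
esac

echo "Chipset $CHIP -> install package: $PKG"
# On the laptop itself you would then run:
#   sudo apt-get install "$PKG"
```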


Thursday, 14 May 2015

Adding Services to Group Policies

By default, when you load up the services list in Group Policy editor, only the services running on that particular server are shown.  If you want to control services running on other machines, you need to add them.  Any service at all can be added, including ones from third party vendors.

Step 1: Export Settings


  1. Log on to the machine that runs the service you require
  2. Run secpol.msc
  3. Create a new template
  4. Within that template, navigate to "System Services" and all the services currently on that machine will be listed.
  5. Edit the service(s) that you require (note: ONLY edit these services)
  6. Save the template
  7. Copy that file to the machine where you run the GP Editor (usually a DC)

Step 2: Import Settings


  1. Load Group Policy Editor, and edit the policy that controls services you require
  2. Navigate to the ‘Security’ node, right-click, choose import and select the file exported above
  3. The service(s) should now appear in the policy, and can be modified just like any other service

Saturday, 25 April 2015

Active Directory Basics

There are very few hard and fast rules when it comes to Active Directory (AD); its whole purpose is to be as flexible as possible.  However, when designing an AD structure there are a few things that you should bear in mind:
  1. Keep it as simple as possible
  2. Create multiple AD sites if required
  3. Use multiple domain controllers and DNS servers
  4. Ensure there are enough Global Catalogue servers
  5. FSMO Roles
  6. Restrict who can administer the structure and schema
So, what do these headings mean?  Below they are explained in a little more detail:

Keep it as simple as possible

Don't overcomplicate matters!  Keep your AD structure simple, and design it in such a way that it aids administration of the system.  Do not let "management" get involved in its structure; they nearly always want it to mimic the organisational structure of the company, and this is often the least useful way of organising things.

Design your AD structure based on the following two main uses of an Organisational Unit (OU):
  • Configuring objects within an Organisational Unit
  • Delegating control of objects within an Organisational Unit
In my experience, the latter is an often underutilised aspect of AD planning and design.



Create Multiple AD sites if required

Whilst attempting to not overcomplicate matters, do not sacrifice functionality for simplicity.  If a more complicated Active Directory structure is required, then by all means create one.  If your network has WAN links, then these should be separate sites.


Use multiple domain controllers and DNS servers

Whether your Domain Controllers and DNS Servers are physical or virtual, ensure that they are dedicated to their role.  Do not be tempted to use a file server as a Domain Controller for example.
Servers that have multiple roles tend to cause problems if (or more likely when) a restore of one of the components is required.  It is possible to place DHCP and DNS on the Domain Controllers, and whilst such a setup works quite well, I would always advocate having multiple separate DNS Servers, since Active Directory depends on this service so heavily.

A further consideration for the virtual world: If you have multiple virtual Domain Controllers and DNS Servers, do NOT host them on the same physical server if this can be avoided.  At the very least, spread them across multiple hosts, and if possible, place them in different locations.


Ensure there are enough Global Catalogue servers

Assuming you have multiple Domain Controllers (and why wouldn't you?), make multiple servers a Global Catalogue server.  More importantly, if you have multiple sites, ensure that there is a Global Catalogue server at EACH site, otherwise clients will have to go over the WAN link to look up information from the Global Catalogue.


FSMO Roles

As the saying goes: "In Active Directory, all domain controllers are equal, but some are more equal than others".  Domain Controllers that host "Flexible Single Master Operations" (FSMO) roles are vital to the running of Active Directory.  If you have multiple domain controllers, spread the FSMO roles out amongst them, and ensure that any domain controller that hosts a FSMO role is backed up regularly - but you were doing that already right?!


Restrict who can administer the Structure and Schema

The first step is to determine whether the whole Active Directory structure will be managed by a single person or a team, or broken down into components/areas that different people manage.  Decide who will manage what, and assign permissions that ONLY allow them to do what is required (delegating control of objects within an Organisational Unit).

Whilst not something that appears overly common in organisations, it is possible to edit the Active Directory Schema itself (usually for the addition of fields).  This should absolutely be restricted to one or two people.


Flexible Single Master Operation (FSMO) Role Placement

It is not the goal of this article to explain the FSMO roles, rather just to provide information on where they should be located.

In Windows Server Active Directory, there are 5 FSMO roles which can be hosted on any Domain Controller.  Each role sits on just one DC, and theoretically you can have all five roles on one DC, or one role on each of five DCs; however, neither of these scenarios is considered best practice.

In small organisations, where cost is an issue, it is common to find only a single domain controller.
There is nothing wrong with this, except from a redundancy point of view.  If the DC fails, there is no standby DC to take over, therefore all domain tasks stop (possibly even the ability to log on depending on how the security is configured) until the DC is recovered from a backup.  Note: In this scenario, it is imperative that the DC is backed up.

Microsoft Best Practice is to split the roles as follows:

Forest Wide Roles:
Schema Master
Domain Naming Master

Domain Wide Roles:
Relative ID (RID) Master
PDC Emulator
Infrastructure Master*

The PDC emulator and the RID master should be on the same DC, if possible.
The Schema Master and Domain Naming Master should also be on the same DC.

*Infrastructure Master

The Infrastructure Master (IM) role is an interesting one, since depending on the complexity of the set-up, it may not be needed at all. General guidance (for legacy NT 4.0 related reasons) is to place the IM role on a non-global catalog server.  However, there are two things to consider before choosing the location of this role:

  1. Single Domain Forest:
    If a Forest has a single Active Directory domain, there are no phantoms.  As such, the Infrastructure Master has nothing to do, and can therefore be placed on any DC, regardless of whether or not it hosts the Global Catalog.

  2. Multi-Domain Forest:
    a) If a Forest has multiple domains, but EVERY DC hosts the Global Catalog, there are no phantoms, and again the Infrastructure Master has nothing to do.

    b) If a Forest has multiple domains, and only SOME of the DCs host the Global Catalogue, then the Infrastructure Master MUST reside on a DC that does NOT host the Global Catalogue.


In my experience, the vast majority of set-ups fall into category 1 or 2a, and therefore the Infrastructure Master can sit wherever you want.

If you are not sure on which DC each of the 5 FSMO roles currently resides, run the following command on any Domain Controller:
    NetDOM /query FSMO

Tuesday, 14 April 2015

Changing the Product Key on Windows Server

Occasionally, it may be necessary to change the product key on a server.  For example, if you have built the server using a development product key and now want to licence it properly with a customer's licence.

Once you have licenced the server, however, the option to enter a product key goes away, and there is no obvious way to get it back.  You could rebuild the server from scratch, but this seems like overkill.

Alternatively:

  • Open a command prompt and change working directory to system32 directory (or ensure the system32 directory is in the path)
  • Type slmgr.vbs -ckms (this will remove any KMS entry)
  • Type slmgr.vbs -upk (this will remove the current product key)
  • Type slmgr.vbs -ipk <product key> (where <product key> is the new product key you want to use, in the following format: xxxxx-xxxxx-xxxxx-xxxxx-xxxxx)
  • Type slmgr.vbs -ato (this activates the server; an Internet connection is required.  Use this on Core servers, which have no GUI)
    Alternatively, type slui, which will open the usual activation GUI.

This process has been tested on Windows Server 2008, 2008 R2 and 2012 R2.
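Since a mistyped key tends to produce an unhelpful slmgr error, it can be worth sanity-checking the key's shape first.  This is a rough sketch; the key shown is a made-up placeholder, and the slmgr calls are only referenced in comments:

```shell
#!/bin/sh
# Sketch: check that a product key has the expected shape
# (xxxxx-xxxxx-xxxxx-xxxxx-xxxxx) before handing it to slmgr.vbs.
# The key below is a made-up placeholder, not a real product key.
KEY="ABCDE-12345-FGHIJ-67890-KLMNO"

if echo "$KEY" | grep -Eq '^[A-Z0-9]{5}(-[A-Z0-9]{5}){4}$'; then
    echo "format OK"
    # On the server you would then run (not executed here):
    #   slmgr.vbs -upk
    #   slmgr.vbs -ipk <the key>
    #   slmgr.vbs -ato
else
    echo "format invalid" >&2
    exit 1
fi
```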

Wednesday, 4 March 2015

Who is logged on to the SQL Server?

This tiny script will tell you who is logged on to SQL Server, and which Database they are connected to. Simply run at an SQL Command Prompt, or within Management Studio:

SELECT DB_NAME(dbid) as DBName, 
       COUNT(dbid) as NumberOfConnections, 
       loginame as LoginName 
FROM sys.sysprocesses 
WHERE dbid > 0 
GROUP BY dbid, loginame
ORDER BY DBName
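If you would rather run this from a script or scheduled task than from Management Studio, the query can be dropped into a file and fed to sqlcmd.  A sketch, with a placeholder server name:

```shell
#!/bin/sh
# Sketch: save the query above to a file so it can be run non-interactively.
# MYSQLSERVER is a placeholder server name.
SQLFILE=/tmp/connections.sql

cat > "$SQLFILE" <<'EOF'
SELECT DB_NAME(dbid) AS DBName,
       COUNT(dbid) AS NumberOfConnections,
       loginame AS LoginName
FROM sys.sysprocesses
WHERE dbid > 0
GROUP BY dbid, loginame
ORDER BY DBName
EOF

# With the SQL client tools installed, run it with Windows authentication
# (commented out here, since it needs a live server):
#   sqlcmd -S MYSQLSERVER -E -i "$SQLFILE"
cat "$SQLFILE"
```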


Wednesday, 25 February 2015

Creating Linux NTP Server

Linux, in this example Ubuntu, does not come with a built-in NTP server.  However, it is a nice lightweight platform that can be used as an alternative to a Windows machine (which would obviously have to be licenced), and the NTP service can easily be added, assuming the machine has an Internet connection (it only needs this during the configuration phase).

Personally, I like to use Ubuntu Server 10.04 (http://releases.ubuntu.com/), since this isn't burdened with the updated GUI that requires additional resources and is just not needed here.
The setup is pretty much self-explanatory, simply ensure the machine has a network connection (preferably to the Internet) before starting.

Installing the NTP Software:

Type: sudo -s and then type the appropriate password
To get and install the NTP Software (you MUST have an Internet connection), type: apt-get install ntp

Edit the NTP Configuration File:

(This uses the "vi" editor, you can obviously use one of your choice, such as Nano)

Open the file for editing: vi /etc/ntp.conf
Press the ‘Insert’ key
Insert the following lines of code after “server ntp.ubuntu.com”
     server 0.uk.pool.ntp.org iburst
     server 1.uk.pool.ntp.org
     server 2.uk.pool.ntp.org
     server 3.uk.pool.ntp.org
     server 127.127.1.0
     fudge 127.127.1.0 stratum 10
     tos orphan x (where x is a stratum level between 4 and 10)
Press the ‘Esc’ key, then enter :wq and press enter (to save the file and quit the editor)

[Note: The "tos orphan x" line is only needed if you intend to use this NTP server as a definitive time source on the network, without connecting this server to the Internet.  In fact, if this is the case, the "server 0" to "server 3" lines can be safely deleted, I leave them in case the server is connected to the Internet at a later date]
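The edit above can also be scripted.  The sketch below applies the same server lines to a scratch copy of ntp.conf (rather than the real /etc/ntp.conf) using GNU sed, so the result can be reviewed first.  The "tos orphan" line is omitted, since its stratum value depends on your network:

```shell
#!/bin/sh
# Sketch: apply the ntp.conf additions from the steps above to a scratch
# copy, inserting the pool servers after "server ntp.ubuntu.com".
CONF=/tmp/ntp.conf.sketch
echo "server ntp.ubuntu.com" > "$CONF"   # stand-in for the real /etc/ntp.conf

# GNU sed: append the new lines after the matching server line
sed -i '/^server ntp\.ubuntu\.com/a\
server 0.uk.pool.ntp.org iburst\
server 1.uk.pool.ntp.org\
server 2.uk.pool.ntp.org\
server 3.uk.pool.ntp.org\
server 127.127.1.0\
fudge 127.127.1.0 stratum 10' "$CONF"

cat "$CONF"
```

Once the scratch copy looks right, the same sed command can be pointed at /etc/ntp.conf (with root privileges) before restarting the NTP service.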

Restart the NTP Server:

     /etc/init.d/ntp restart

Check Status:

You can check the status of the various NTP servers by typing: ntpq -c lpeer
(Note: This may take a while to update after first being initialised)

Changing IP Address:

Assuming that the machine was built with a connection to the Internet, it will likely have a DHCP assigned IP address.  This can be found by typing: ifconfig eth0

To set a static IP address:

Backup the current IP details: cp /etc/network/interfaces /etc/network/interfaces.backup
Edit the interfaces file: vi /etc/network/interfaces
Press the ‘Insert’ key
Change 'iface eth0 inet dhcp' to 'iface eth0 inet static'
Enter the following lines (substituting in the desired values for the IP addresses):
     address 192.168.0.10
     netmask 255.255.255.0
     gateway 192.168.0.1 (This line can be omitted if not needed)
Press the ‘Esc’ key, then enter :wq and press enter (to save the file and quit the editor)
Type: ifdown eth0, then ifup eth0 to restart the network interface
Check that the new static IP has been assigned by once again typing: ifconfig
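The resulting stanza can be generated into a scratch file first, to double-check it before editing /etc/network/interfaces for real.  The addresses are the example values from the steps above:

```shell
#!/bin/sh
# Sketch: generate the static-IP stanza from the steps above into a scratch
# file for review. The addresses are the example values used in the post;
# this stanza replaces the existing "iface eth0 inet dhcp" line.
IFACE_FILE=/tmp/interfaces.sketch

cat > "$IFACE_FILE" <<'EOF'
iface eth0 inet static
     address 192.168.0.10
     netmask 255.255.255.0
     gateway 192.168.0.1
EOF

cat "$IFACE_FILE"
```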

Reconfiguring the NTP Server after Cloning or Building from a Template:

When a virtual NTP server is cloned or imported from a template, the “Ethernet 0” card is often unavailable, and the interface on the new server is named “Ethernet 1”.  This can be checked by typing ifconfig eth1

If this is the case, edit the interfaces file as follows:
Edit the interfaces file: sudo vi /etc/network/interfaces
Press the ‘Insert’ key
Change any references to “eth0” to “eth1”, and update the “address”, “netmask” and “gateway” entries to reflect the IP addresses for the network.
Press the ‘Esc’ key, then enter :wq and press enter (to save the file and quit the editor), then type ifup eth1 to bring up the interface.
Check that the new static IP has been assigned by once again typing: ifconfig

Tuesday, 24 February 2015

Domain Kerberos Error

This post describes how to resolve the following Kerberos error:
"The trust relationship between this workstation and the primary domain failed"

Although the error message states "workstation", the exact same error message will be seen on Windows Servers too.

Although this error can be seen for a variety of reasons, it is typically found when a domain connected machine is restored to a previous point in time via third party tools (i.e. not Windows restore), or when using snapshots in a virtual environment.  It occurs because the computer's account has become mismatched with the one on the domain controller.

One could simply remove the machine from the domain and re-add it, however this can be a pain, especially if you have more than one machine with the error.  The other option is to reset the computer account, so that it is in sync with the one on the domain controller.

To reset the account
  1. Log on to the affected machine using the local administrator account
  2. Open PowerShell
  3. Run the following command:
    Test-ComputerSecureChannel -Repair -Credential (Get-Credential) -Verbose
    When prompted, enter the credentials of a user that has Domain Admin permissions.
  4. The account will be reset and PowerShell will report a successful repair.
  5. The connection can be tested by running the following command:
    Test-ComputerSecureChannel -Verbose
    This command should report that the secure channel is in good condition.
You can now log off and back on as a domain user.

Monday, 23 February 2015

SQL DDL Triggers

This post covers the details of creating a Data Definition Language (DDL) trigger, which is used to perform administrative tasks within the database or on the server itself.  For a useful jumping-off point regarding DDL triggers, see this TechNet article.

I have used such triggers in the past to forcibly log a user off the system when they attempt to access SQL Server through Management Studio.  Typically this is when a username and password used by an application are well known, and I only want that application to be able to connect, not a user via Management Studio.  Such a scenario would be better controlled via user permissions, however this is not always possible.

The following code is used to create a trigger (called “TR_LOGON_APP”).  This trigger occurs after the user has logged on, but before the session is created.  If the user is connecting with the Management Studio, an error is thrown, and the “This login is for application use only” error is entered into the SQL Log.

To create such a trigger:
  • Run “sqlcmd” at a command prompt
  • Copy and paste the code into the SQL command window
    [Note: Change the <username> section of the code to reflect the user you want to prevent logging on]

CREATE TRIGGER [TR_LOGON_APP]
ON ALL SERVER
FOR LOGON
AS
BEGIN

    DECLARE @program_name nvarchar(128)
    DECLARE @host_name nvarchar(128)

    SELECT @program_name = program_name,
           @host_name = host_name
    FROM sys.dm_exec_sessions AS c
    WHERE c.session_id = @@spid

    IF ORIGINAL_LOGIN() IN ('<username>')
        AND @program_name LIKE '%Management%Studio%'
    BEGIN
        RAISERROR('This login is for application use only.',16,1)
        ROLLBACK;
    END
END;
  • Review the code, type GO, and press the enter/return key
  • To see any triggers that have been created, type the following at the SQL command prompt:
            SELECT * FROM sys.server_triggers
            GO
  • To delete a trigger, run the above command to list all DDL triggers, and make a note of the one you want to delete, and then type the following at the SQL command prompt:
    DROP TRIGGER <trigger name> ON ALL SERVER
    GO

Show Hidden Devices (Windows)

There are a number of instances where devices that were once part of a system no longer show up, however nearly all of them result from the simple removal of hardware.

One would think that simply loading "Device Manager" and selecting "View" / "Show Hidden Devices" would be enough to view them, but alas no.

Thankfully the solution is really simple:
  1. Open a command prompt
  2. Type set devmgr_show_nonpresent_devices=1
  3. Type devmgmt.msc
  4. From within Device Manager, select "View" then "Show Hidden Devices"
  5. Browse the various categories of device for any that are greyed out.  Right-click on them, and select "Uninstall"
[Note: Do not uninstall devices that you are unsure about, as this could leave your system in a non-working state.  Windows should automatically scan for and re-install any missing devices, but if specific software is required, this is not always possible]