Saturday, August 30, 2008

10 Windows Server 2008 Netsh commands you should know


Taking a look at ten Netsh commands that every Windows administrator should know.
Introduction

I have written a number of different Netsh articles and other authors have published their own Netsh articles. This just shows how important and versatile Netsh really is. In this article, I will cover 10 Netsh commands that every Windows admin should know. Netsh is so powerful and flexible that, in my opinion, I cannot choose the “most important” Netsh commands; the importance of a command will vary from admin to admin. What I can do is choose the 10 commands that I feel will either show you valuable information or help you out when you are in trouble. Keep in mind that these commands can be scripted (as they are all command line tools), so whatever you can do with an individual command on a single machine, you could write a script to perform on every machine in your network. 
What is Netsh?

Microsoft Windows Netsh is a command line scripting utility. With Netsh, you can view or change the network configuration of your local computer or a remote computer. You can manually run Netsh commands or you can create batch files or scripts to automate the process. Not only can you run these commands on your local computer but also on remote computers, over the network.

Netsh also provides a scripting feature that allows you to run a group of commands in batch mode against a specified computer. With netsh, you can save a configuration script in a text file for archival purposes or to help you configure other computers.

Netsh is not “new” with Windows Server 2008 or Windows Vista. Netsh has been around for a long time. Netsh commands are available in Windows 2000, XP, and Windows Server 2003. What is new are a number of options for Netsh with Windows Server 2008 and Vista. Additionally, I feel that Netsh is underutilized by admins and most admins are not aware of the new Windows Server 2008 and Vista Netsh enhancements. It is my hope to educate Windows admins about the new netsh features and the power of netsh in this article.
What is different about Windows Server 2008 netsh vs. Windows XP?

There are a number of differences even at the core command level between the Windows XP version of netsh and the Windows Server 2008 netsh. To compare these, I ran “netsh /?” in each operating system. While Windows XP has “routing” listed as a context and Windows Server 2008 does not, that is the only context that Win 2008 lacks (and that is included in the Win 2008 RAS context). Otherwise, Windows Server 2008 has the following netsh context options available that Windows XP does not:
dhcp 
dhcpclient 
http 
ipsec 
lan 
nap 
netio 
rpc 
winhttp

Thus, as you can see, there are many more context options available in Windows Server 2008.

With no more delay, let’s get started with the top 10 Netsh commands that every admin should know.
#10 – How to get help

Every Windows admin should know how to get guided help with netsh. This is easy – just use the “/?” command to be guided through what you are trying to do. For example, to show all netsh contexts (categories of options), just type: netsh /?
Figure 1: Results of netsh /? help options

From there, you can select a context and be guided through configuring or showing options in that context. For example, if I typed netsh lan /?, I would see:
Figure 2: Results of netsh lan /?

From there, I can continue with the guided help by typing:

netsh lan show /?

And, from there, I would see that I can show interfaces with:

netsh lan show interfaces

Being able to guide yourself through the many netsh commands using /? is a very valuable skill.
#9 – Supplying remote machine names and credentials

If you run netsh /? you will see that you can supply the name or IP address of the remote machine and credentials for the remote machine you will run netsh against. The options are “-r” for the machine, “-u” for the username, and “-p” for the password. Here is an example:

netsh -r WinXP-1 -u winxp-1\administrator -p My!Pass1 interface ip show config

As you can see, I supplied the remote machine name, remote username, and password which allowed me to perform this command over the network. You can perform any of the commands shown here over the network as long as the remote machine supports that command (different operating systems will use different variations of commands).
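
Because these options work with any netsh command, they combine naturally with a simple batch file. Here is a minimal sketch that runs the same command against a list of machines; the machine names and credentials are hypothetical examples only:

```
@echo off
rem Run the same netsh command against several machines.
rem The machine names and credentials below are examples only.
for %%M in (WinXP-1 WinXP-2 WinXP-3) do (
    echo ===== %%M =====
    netsh -r %%M -u %%M\administrator -p My!Pass1 interface ip show config
)
```

A real script would likely read the machine list from a file and avoid putting the password on the command line, but this shows the idea.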
#8 – Run Netsh in interactive mode or with a script

Netsh can be run either interactively (with you typing commands manually) or with a script. Say that you wanted to manually step through some commands on your local machine or a remote machine. You could just start by typing netsh at the command line and you would see: 

netsh>

From there, you can enter all the netsh commands you want, or even tell netsh to connect to a remote machine with set machine.

On the other hand, you could use netsh -f and specify a script that netsh would use.
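
For example, you could put a handful of commands in a plain text file and hand it to netsh with -f. Here is a minimal sketch of such a script; the file name, interface name, and addresses are examples only, so adjust them to your own network:

```
# setip.txt - a hypothetical netsh script, run with: netsh -f setip.txt
# (lines beginning with # are comments, just as in netsh dump output)
interface ip
set address name="Local Area Connection" source=static addr=192.168.1.50 mask=255.255.255.0 gateway=192.168.1.1 gwmetric=1
set dns name="Local Area Connection" source=static addr=192.168.1.10
```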
#7 – Open a port on your firewall

With netsh, you can quickly and easily open a port on your firewall if you know the right command. Here is an example of opening port 445-

netsh firewall set portopening tcp 445 smb enable

If the command was successful, you should get a response of “Ok.”
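
If you later want to verify or undo the change, the same firewall context can show and delete port openings. A sketch, using the XP/2003-era firewall context:

```
netsh firewall show portopening
netsh firewall delete portopening tcp 445
```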
#6 – Export your current network configuration to a file and import it

With netsh, exporting and importing your IP address configuration is easy – unlike in the GUI interface. To export your configuration, just do:

netsh -c interface dump > test.txt

Figure 3: Export of IP address configuration and viewing the file

Later on this machine or on a different machine, you could import this configuration with-

netsh -f test.txt
#5 – Try out the latest Netsh uses

As mentioned above, there are a number of new netsh features in Windows Server 2008.

Here are the new categories that I see on my Windows Server 2008 system:
dhcp 
dhcpclient 
http 
ipsec 
lan 
nap 
netio 
rpc 
winhttp

For example, you can configure not only your DHCP client but also your DHCP server. You can configure IPSec encryption, the Network Access Protection (NAP) client, and much more!

As you add other roles & features to your server, you will have additional contexts available to you. For example, if you add the Network Policy Server role to Windows Server 2008, you will have “nps” as a new netsh context that can be configured.

For the official Microsoft Windows Server 2008 netsh documentation, see this URL:

Microsoft TechNet: Windows Server 2008 Netsh Technical Reference
#4 – TCP/IP troubleshooting and interface resets

There are a number of things you can do with netsh to troubleshoot and reset your TCP/IP network interface. Here are some examples:
Reset all IP protocol stack configurations on your interface and send the output to a log file - netsh int ipv4 reset resetlog.txt 
Install the TCP/IP protocol - netsh int ipv4 install 
Uninstall the TCP/IP protocol - netsh int ipv4 uninstall
#3 – Configure the Windows Advanced Firewall

In my previous article, How to Configure Windows 2008 Advanced Firewall with the NETSH CLI, I discussed how you can now configure the new Windows advanced (bi-directional) firewall using the new advfirewall networking context settings using netsh in Windows Server 2008 and Windows Vista. Of course, you can also configure the traditional Windows firewall. Here are some examples:
Show all firewall rules - netsh advfirewall firewall show rule name=all 
Delete an inbound advanced firewall rule for port 21 - netsh advfirewall firewall delete rule name=all protocol=tcp localport=21 
Export Windows Advanced Firewall settings - netsh advfirewall export "c:\advfirewall.wfw"

Perhaps the most common command you might use is the command to enable or disable your Windows firewall, like this:

netsh firewall set opmode disable

or

netsh firewall set opmode enable

However, for more specific information & examples, please see my article, above.
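
To give a feel for the advfirewall syntax, here is a sketch of adding an inbound rule that allows TCP port 21; the rule name is just an example:

```
netsh advfirewall firewall add rule name="Allow FTP" dir=in action=allow protocol=TCP localport=21
```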
#2 – Configure Wireless Settings

In another article, Configuring Windows Server 2008 & Windows Vista Wireless connections from the CLI using netsh wlan, I discussed how you can now configure wireless networking context settings using netsh in Windows Server 2008 and Windows Vista. Here are some examples:
Connect to an already defined wireless network- netsh wlan connect ssid="mySSID" name="WLAN-Profil1" 
Show your current wireless settings - netsh wlan show settings 
Add an already exported wireless network profile - netsh wlan add profile filename="Wireless Network Connection-BOW.xml"
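
The profile XML used by add profile has to come from somewhere; you can create one by exporting an existing profile. A sketch, with a hypothetical profile name and folder:

```
netsh wlan export profile name="WLAN-Profil1" folder="C:\profiles"
```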


Wednesday, August 20, 2008

Troubleshooting Connectivity Problems on Windows Networks (Part 1)

This article series will explain various troubleshooting techniques that you can use when machines on a Windows network have difficulty communicating with each other.

Today’s network hardware and software is more reliable than ever but even so, things do occasionally go wrong. In this article series, I am going to discuss some troubleshooting techniques that you can use when a host on your Windows network has trouble communicating with other network hosts. For the sake of those with less experience in working with the TCP/IP protocol, I’m going to start with the basics, and then work toward the more advanced techniques.

Verify Network Connectivity

When one host has trouble communicating with another, the first thing that you must do is to gather some information about the problem. More specifically, you need to document the host’s configuration, find out if the host is having trouble communicating with any other machines on the network, and find out if the problem affects any other hosts.

For example, suppose that a workstation is having trouble communicating with a particular server. That in itself doesn’t really give you a lot to go on. However, if you were to dig a little bit deeper into the problem and found out that the workstation couldn’t communicate with any of the network servers, then you would know to check for a disconnected network cable, a bad switch port, or maybe a network configuration problem.

Likewise, if the workstation were able to communicate with some of the network servers, but not all of them, that too would give you a hint as to where to look for the problem. In that type of situation, you would probably want to check to see what the servers that could not be contacted had in common. Are they all on a common subnet? If so, then a routing problem is probably to blame.

If multiple workstations are having trouble communicating with a specific server, then the problem probably isn’t related to the workstations unless those workstations were recently reconfigured. More than likely, it is the server itself that is malfunctioning.

The point is that by starting out with a few basic tests, you can gain a lot of insight into the problem at hand. The tests that I am about to show you will rarely show you the cause of the problem, but they will help to narrow things down so that you will know where to begin the troubleshooting process.

PING

PING is probably the simplest TCP/IP diagnostic utility ever created, but the information that it can provide you with is invaluable. Simply put, PING tells you whether or not your workstation can communicate with another machine.

The first thing that I recommend doing is opening a Command Prompt window, and then entering the PING command, followed by the IP address of the machine that you are having trouble communicating with. When you do, the machine that you have specified should produce four replies, as shown in Figure A.

Figure A: The specified machine should generate four replies

The responses essentially tell you how long it took the specified machine to respond with thirty-two bytes of data. For example, in Figure A, each of the four responses was received in less than four milliseconds.

Typically, when you issue the PING command, one of four things will happen, each of which has its own meaning.

The first thing that can happen is that the specified machine will produce four replies. This indicates that the workstation is able to communicate with the specified host at the TCP/IP level.

The second thing that can happen is that all four requests time out, as shown in Figure B. If you look at Figure A, you will notice that each response ends in TTL=128. TTL stands for Time To Live. Despite the name, the TTL is not a time value but a hop count: the packet starts out with a TTL value, and the TTL is decremented by one for each hop along the way, both outbound and on the way back. A hop occurs when a packet moves from one network to another. If the TTL ever reaches zero, the packet is discarded. I will be talking a lot more about hops later on in this series.

Figure B: If all four requests time out, it could indicate a communications failure

At any rate, if all four requests have timed out, it means that no reply was received before the PING command’s timeout period expired. This can mean one of three things:

  • Communications problems are preventing packets from flowing between the two machines. This could be caused by a disconnected cable, a bad routing table, or a number of other issues.
  • Communications are occurring, but are too slow for replies to arrive before PING gives up. This can be caused by extreme network congestion, or by faulty network hardware or wiring.
  • Communications are functional, but a firewall is blocking ICMP traffic. PING will not work unless the destination machine’s firewall (and any firewalls between the two machines) allow ICMP Echo Request and Echo Reply traffic.

A third thing that can happen when you enter the PING command is that some replies are received, while others time out. This can point to bad network cabling, faulty hardware, or extreme network congestion.

The fourth thing that can occur when pinging a host is that you receive an error similar to the one that is shown in Figure C.

Figure C: This type of error indicates that TCP/IP is not configured correctly

The PING: Transmit Failed error indicates that TCP/IP is not configured correctly on the machine on which you are trying to enter the PING command. This particular error is specific to Vista, though. Older versions of Windows also produce an error when TCP/IP is configured incorrectly, but their error message is “Destination Host Unreachable”.

What if the PING is Successful?

Believe it or not, it is not uncommon for a ping to succeed, even though two machines are having trouble communicating with each other. If this happens, it means that the underlying network infrastructure is good, and that the machines are able to communicate at the TCP/IP level. Typically, this is good news, because it means that the problem that is occurring is not very serious.

If normal communications between two machines are failing, but the two machines can PING each other successfully (be sure to run the PING command from both machines), then there is something else that you can try. Rather than pinging the network host by IP address, try replacing the IP address with the host’s fully qualified domain name, as shown in Figure D.

Figure D: Try pinging the network host by its fully qualified domain name

If you are able to ping the machine by its IP address, but not by its fully qualified domain name, then you most likely have a DNS issue. The workstation may be configured to use the wrong DNS server, or the DNS server may not contain a host record for the machine that you are trying to ping.
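
One quick way to confirm a suspected DNS problem is to query the DNS server directly with nslookup and compare the result against a ping by name. A sketch, using a hypothetical host name:

```
nslookup server1.contoso.com
ping server1.contoso.com
```

If nslookup returns no record, or returns a different address than you expected, you are most likely looking at a DNS issue rather than a connectivity issue.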

If you look at Figure D, you can see that the machine’s IP address is listed just to the right of its fully qualified domain name. This proves that the machine was able to resolve the fully qualified domain name. Make sure that the IP address that the name was resolved to is correct. If you see a different IP address than the one that you expected, then you may have an incorrect DNS host record.





Wednesday, August 13, 2008

Troubleshooting Logon Problems

This article discusses some of the more common causes of logon failures in Active Directory environments.

Logging into a computer is such a routine part of the day that it is easy to not even think about the login process. Even so, things can and occasionally do go wrong when users log into Windows. In this article, I will talk about some of the things that can cause logon failures, and show you how to get around those problems.

Before I Begin

Before I get started, I just want to quickly mention that in order to provide as much useful information as possible, I am going to avoid talking about the most obvious causes of logon failures. This article assumes that before you begin the troubleshooting process, you have checked to make sure that the user is entering the correct password, the user's password has not expired, and that there are no basic communications problems between the workstation and the domain controller.

The System Clock

It may seem odd, but a workstation's clock can actually be the cause of a logon failure. If the clock is more than five minutes different from the time on your domain controllers, then the logon will fail.

In case you are wondering, the reason for this has to do with the Kerberos authentication protocol. At the beginning of the authentication process, the user enters their username and password. The workstation then sends a Kerberos Authentication Server Request to the Key Distribution Center (KDC). This Kerberos Authentication Server Request contains several different pieces of information, including:

  • The user’s identification
  • The name of the service that the user is requesting (in this case it’s the Ticket Granting Service)
  • An authenticator that is encrypted with the user’s master key. The user’s master key is derived by running the user’s password through a one-way function.

When the Key Distribution Center receives the request, it looks up the user’s Active Directory account. It then calculates the user’s master key and uses it to decrypt the authenticator (also known as preauthentication data).

When the user’s workstation created the authenticator, it placed a time stamp within the encrypted file. Once the Key Distribution Server decrypts this file, it compares the time stamp to the current time on its own clock. If the time stamp and the current time are within five minutes of each other, then the Kerberos Authentication Server Request is assumed to be valid, and the authentication process continues. If the time stamp and the current time are more than five minutes apart, then Kerberos assumes that the request is a replay of a previously captured packet, and therefore denies the logon request. When this happens, the following message is displayed:

The system cannot log you on due to the following error: There is a time difference between the client and server. Please try again or consult your system administrator.

The solution to the problem is simple; just set the workstation’s clock to match the domain controller’s clock.
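
On a domain-joined workstation you do not even have to set the clock by hand; the Windows Time service can resynchronize it against the domain’s time source. A sketch:

```
w32tm /resync
```

If the Windows Time service is not running or not configured, you would need to start it first (net start w32time) or simply set the clock manually.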

Global Catalog Server Failures

Another major cause of logon problems is a global catalog server failure. A global catalog server is a domain controller that has been configured to host the global catalog, a searchable, partial representation of every object in every domain of the entire forest.

When the forest is initially created, the first domain controller that you bring online is automatically configured to act as a global catalog server. The problem is that this server can become a single point of failure, because Windows does not automatically designate any other domain controllers to act as global catalog servers. If the global catalog server fails, then only domain administrators will be able to log into the Active Directory.

Given the global catalog server’s importance, you should work to prevent global catalog server failures. Fortunately, you can designate any or all of your domain controllers to act as global catalog servers. Keep in mind though that you should only configure all of your domain controllers to act as global catalog servers if your forest consists of a single domain. Having multiple global catalog servers is a good idea even for forests with multiple domains, but figuring out which domain controllers should act as global catalog servers is something of an art form. You can find Microsoft’s recommendations here.

If your global catalog server has already failed, and nobody can log in, then the best thing that you can do is work to return the global catalog server to a functional state. There is a way of allowing users to log in even though the global catalog server is down, but there are security risks associated with doing so.

If the Active Directory is running in native mode, then the global catalog server is responsible for checking users’ universal group memberships. If you choose to allow users to log on during the failure, then universal group memberships will not be checked. If you have assigned explicit denials to members of certain universal groups, then those denials will not be in effect until the global catalog server is brought back online.

If you decide that you must allow users to log on, then you will have to edit the registry on each of your domain controllers. Keep in mind that editing the registry is dangerous, and that making a mistake can destroy Windows. I therefore recommend making a full system backup before continuing.

With that said, open the Registry Editor and navigate through the registry tree to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa. Now, create a new DWORD value named IgnoreGCFailures, and set the value to 1. You will have to restart the domain controller after making this change.
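
Because reg.exe can make the same change from the command line, this step is easy to script across several domain controllers rather than opening the Registry Editor on each one. A sketch:

```
reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v IgnoreGCFailures /t REG_DWORD /d 1 /f
```

The same warning applies: back up the system first, and restart the domain controller after making the change.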

DNS Server Failure

If you suddenly find that none of your users can log into the network, and your domain controllers and global catalog servers seem to be functional, then a DNS server failure might have occurred. The Active Directory is completely dependent on the DNS services.

The DNS server contains host records for each computer on your network. The computers on your network use these host records to resolve computer names to IP addresses. If a DNS server failure occurs, then host name resolution will also fail, eventually impacting the logon process.

There are two things that you need to know about DNS failures in regard to troubleshooting logon problems. First, the logon failures may not happen immediately. The Windows operating system maintains a DNS cache, which includes the results of previous DNS queries. This cache prevents workstations from flooding DNS servers with name resolution requests for the same objects over and over.

In many cases, workstations will have cached the IP addresses of domain controllers and global catalog servers. Even so, items in the DNS cache do eventually expire and will need to be refreshed. You will most likely start noticing logon problems when cached host records begin to expire.
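
You can see exactly what is in a workstation’s DNS cache, and clear it, from the command line; flushing the cache is a quick way to rule cached records in or out while troubleshooting:

```
ipconfig /displaydns
ipconfig /flushdns
```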

The other thing that you need to know about DNS server failures is that often times there are plenty of other symptoms besides logon failures. Unless machines on your network are configured to use a secondary DNS server in the event that the primary DNS server fails, the entire Active Directory environment will eventually come to a grinding halt. Although there are exceptions, generally speaking, the absence of a DNS server on an Active Directory network basically amounts to a total communications breakdown.

Conclusion

Although I have discussed some of the major causes of logon failures on Active Directory networks, an important part of the troubleshooting process is to look at how widespread the problem is. For example, if only a single host on a large network is having logon problems, then you can probably rule out DNS or global catalog failures. If a DNS or a global catalog failure were to blame, then the problem would most likely be much more widespread. If the problem is isolated to a single machine, then the problem is most likely related to the machine’s configuration, connectivity, or to the user’s account.


Tuesday, August 12, 2008

OSI Reference Model: Layer 1 hardware

A description of layer 1 of the OSI reference model and the hardware which relates to that layer.

The Open Systems Interconnection (OSI) reference model is a model, developed by the International Organization for Standardization (ISO), which describes how data from an application on one computer can be transferred to an application on another computer. The OSI reference model consists of seven conceptual layers which each specify different network functions. Each function of a network can be assigned to one of these seven layers (or perhaps a couple of adjacent layers) and is relatively independent of the other layers. This independence means that one layer does not need to be aware of how an adjacent layer is implemented, merely how to communicate with it. This is a major advantage of the OSI reference model and is one of the major reasons why it has become one of the most widely used architecture models for inter-computer communications.

The seven layers of the OSI reference model, as shown in Figure 1, are:

  • Application
  • Presentation
  • Session
  • Transport
  • Network
  • Data link
  • Physical

Figure 1: Diagram of the OSI reference model layers, courtesy of catalyst.washington.edu

Over the next few articles I will be discussing each layer of the model and the networking hardware which relates to that layer. This article, as you have probably guessed from the title, will discuss layer 1; the physical layer.

While many people may simply state that all networking hardware belongs exclusively in the physical layer, they are wrong. Many networking hardware devices can perform functions belonging to the higher layers as well. For example, a network router performs routing functions which belong in the network layer.

What does the physical layer include? Well, the physical layer involves the actual transmission of signals over a medium from one computer to another. This layer includes specifications for the electrical and mechanical characteristics of networking equipment, such as voltage levels, signal timing, data rate, maximum transmission length, and physical connectors. A device which operates solely in the physical layer has no knowledge of the data which it transmits; it simply transmits or receives data.

There are four general functions which the physical layer is responsible for. These functions are:

  • Definitions of hardware specifications
  • Encoding and signaling
  • Data transmission and reception
  • Topology and physical network design

Definitions of hardware specifications

Each piece of hardware in a network will have numerous specifications. If you read my previous article titled Copper and Glass: A Guide to Network Cables, you will learn about some of the more common specifications which apply to network cables. These specifications include things like the maximum length of a cable, the width of the cable, the protection from electromagnetic interference, and even the flexibility.

Another area of hardware specifications is the physical connectors. This includes both the shape and size of the connectors as well as the pin count and layout, if appropriate.

Encoding and signaling

Encoding and signaling is a very important part of the physical layer. This process can get quite complicated. For example, let's look at Ethernet. Most people learn that signals are sent in '1's and '0's using a high voltage level and a low voltage level to represent the two states. While this is useful for some teaching purposes, it is not correct. Signals over Ethernet are sent using Manchester encoding. This means that '1's and '0's are transmitted as rises and falls in the signal. Let me explain.

If you were to send signals over a cable where a high voltage level represents a '1' and a low voltage level represents a '0', the receiver would also need to know when to sample that signal. This is usually done by transmitting a separate clock signal. This method is called Non-Return to Zero (NRZ) encoding, and it has some serious drawbacks. First, if you include a separate clock signal, you are basically transmitting two signals and doubling the work. If you do not want to transmit the clock signal, you could include an internal clock in the receiver, but it must stay in near perfect synchronization with the transmitter's clock. Even if you can synchronize the clocks, which becomes much harder as the transmission speed increases, there is still the problem of keeping them synchronized during a long stretch of the same bit being transmitted; it is the transitions in the signal which help keep the clocks synchronized.

The limitations of the NRZ encoding can be overcome by technology developed in the 1940s at the University of Manchester, in Manchester, UK. Manchester encoding combines the clock signal with the data signal. While this does increase the bandwidth of the signal, it also makes the successful transmission of the data much easier and more reliable.

A Manchester encoded signal transmits data as a rising or falling edge. Which edge represents the '1' and which represents the '0' must be decided in advance, but both conventions are considered Manchester encoded signals. Ethernet and IEEE standards use the rising edge as a logical '1'. The original Manchester encoding used the falling edge as a '1'.

One situation which you may be wondering about is what happens if you need to transmit two '1's in a row: will the signal already be high when you need to transmit the second '1'? This is not a problem, because the rising or falling edge which represents the data is transmitted in the middle of each bit period; the edges of the bit boundaries either contain a transition or do not, which puts the signal in the right position for the next bit to be transmitted. The end result is that at the center of every bit there is a transition; the direction of the transition represents either a '1' or a '0', and the timing of the transitions provides the clock.

While there are many other encoding schemes, many of which are much more advanced than NRZ or Manchester encoding, the simplicity and reliability of Manchester encoding has kept it a valuable standard still widely in use.
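
The encoding scheme described above is simple enough to model in a few lines of Python. This is only an illustrative sketch of the logic, not real signaling code; it uses the IEEE convention in which a rising (low-to-high) mid-bit transition represents a '1':

```python
def manchester_encode(bits):
    """Encode a string of '0'/'1' characters as half-bit voltage levels.

    IEEE convention: '1' is low-then-high (a rising mid-bit edge),
    '0' is high-then-low (a falling mid-bit edge).
    """
    halves = {'1': (0, 1), '0': (1, 0)}
    signal = []
    for bit in bits:
        signal.extend(halves[bit])
    return signal

def manchester_decode(signal):
    """Recover the bits by looking at the direction of each mid-bit edge."""
    bits = []
    for i in range(0, len(signal), 2):
        bits.append('1' if (signal[i], signal[i + 1]) == (0, 1) else '0')
    return ''.join(bits)

encoded = manchester_encode('1101')
print(encoded)                      # [0, 1, 0, 1, 1, 0, 0, 1]
print(manchester_decode(encoded))   # 1101
```

Notice that even the two consecutive '1's produce a transition in every bit period, which is exactly what keeps the receiver's clock synchronized.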

Data transmission and reception

Whether the network medium is an electrical cable, an optical cable, or radio frequency, there needs to be equipment that physically transmits the signal. Likewise, there also needs to be equipment that receives the signal. In the case of a wireless network, this transmission and reception is done by highly designed antennas which transmit, or receive, signals at predefined frequencies with predefined bandwidths.

Optical transmission lines use equipment which can produce and receive pulses of light which encode the logical value of each bit. Equipment such as amplifiers and repeaters, which are commonly employed in long-haul optical transmissions, is also included in the physical layer of the OSI reference model.

Topology and physical network design

The topology and design of your network is also included in the physical layer. Whether your network is a token ring, star, mesh, or a hybrid topology, the decision of which topology to use was chosen with the physical layer in mind.

Also included in the physical layer is the layout of a high availability cluster, as described in my previous article titled High Assurance Strategies.

In general, all you need to remember is that if a piece of hardware is not aware of the data being transmitted, then it operates in the physical layer. In my next article I will discuss the Data Link layer, what makes it different from its adjacent layers, and what hardware is included in it. As always, if you have any questions or comments on what I have written in this article, feel free to send me an email.




OSI Reference Model: Layer 2 Hardware

A discussion of the second layer of the OSI reference model from a hardware perspective.

In my last article, I introduced the Open System Interconnect (OSI) reference model and discussed its first layer: the Physical layer. In this article I will discuss the second layer, the Data Link layer, from a hardware perspective.

The data link layer provides functional and procedural methods of transferring data between two points. There are five general functions which the Data Link layer is responsible for. These functions are:

  • Logical Link Control
  • Media Access Control
  • Data Framing
  • Addressing
  • Error Detection

Logical Link Control

The Logical Link Control (LLC) is usually considered a sublayer of the Data Link layer (DLL), as opposed to a function of the DLL. This LLC sublayer is primarily concerned with multiplexing protocols to be sent over the Media Access Control (MAC) sublayer. The LLC does this by splitting up the data to be sent into smaller frames and adding descriptive information, called headers, to these frames.

Media Access Control

Like LLC, the Media Access Control (MAC) is considered a sublayer of the DLL, as opposed to a function of the DLL. Included in this sublayer is what is known as the MAC address. The MAC address provides this sublayer with a unique identifier so that each network access point can communicate with the network. The MAC sublayer is also responsible for the actual access to the network cable, or communication medium.

Data Framing

If one were to simply send data out onto the network medium not much would happen. The receiver has to know how, and when, to read the data. This can happen in a number of ways and is the sole purpose of framing. In general terms, framing organizes the data to be transferred and surrounds this data with descriptive information, called headers. What, and how much, information these headers contain is determined by the protocol used on the network, like Ethernet.
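
To make the idea concrete, here is a minimal Python sketch of framing in the spirit of Ethernet II. The layout is simplified (real frames also carry a preamble and pad the payload to a 46-byte minimum, both omitted here), and `build_frame` is a hypothetical helper, not a real library API.

```python
import struct
import zlib

def build_frame(dst_mac, src_mac, ethertype, payload):
    """Assemble a simplified Ethernet II frame: header fields
    surround the payload, and a CRC-32 frame check sequence
    is appended so the receiver can detect corruption."""
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    fcs = struct.pack("!I", zlib.crc32(header + payload) & 0xFFFFFFFF)
    return header + payload + fcs

dst = bytes.fromhex("ffffffffffff")              # broadcast address
src = bytes.fromhex("001122334455")              # hypothetical MAC
frame = build_frame(dst, src, 0x0800, b"hello")  # 0x0800 = IPv4
print(len(frame))   # 6 + 6 + 2 + 5 + 4 = 23 bytes
```

The receiver knows how, and when, to read the data precisely because the header fields sit at fixed offsets defined by the protocol.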

The structure of a frame adhering to the Ethernet protocol is shown below in Figure 1.

Figure 1: Structure of an Ethernet frame (Courtesy: Wikipedia)

Addressing

Addressing in layer 2 happens, as I mentioned earlier, with the MAC address of the MAC sublayer. It is very important not to confuse this with network or IP addressing. It can be helpful to associate the MAC address with a specific network access point and the network or IP address associated with an entire device (i.e. a computer, server, or router).

Speaking of routers, keep in mind that routers operate in layer 3, not layer 2. Switches and bridges do operate in layer 2, and therefore direct data based on layer 2 addressing (MAC addresses); they are unaware of IP or network addressing. (Hubs, which simply repeat every signal they receive, belong down in layer 1.) And, just so that I don't get an inbox filled with complaints ... yes I know ... some routers also include layer 2 functionality. I will discuss routers with layer 2 functionality in a future article.

Error Detection and Handling

Whenever data is sent over any kind of transmission medium, there exists a chance that the data will not be received exactly as it was sent. This can be due to many factors including interference and, in the case of long transmissions, signal attenuation. So, how can a receiver know if the data received is error free? There are several methods that can be implemented to accomplish this. Some of these methods are simple and somewhat effective – others are complicated and very effective.

Parity bits are an example of an error detection protocol that is simple and, despite its limited effectiveness, its use is widespread. A parity bit, simply put, is an extra bit added to a message. There are two options for the value of this bit. Which value is chosen depends on the flavor of parity bit detection that is in use. These two flavors are even and odd parity detection. If even parity is in use, then the parity bit is set to the value ('1' or '0') to make the number of '1's in the message even. Likewise, if odd parity is in use the parity bit is set to the value needed to make the number of '1's in the message odd.

When using parity bit error detection, the receiver counts the '1's in the frame, including the parity bit. The receiver will have a setting for even or odd parity; if the number of '1's in the frame does not match this setting, an error is detected. Now this is great but, as I mentioned earlier, the effectiveness of this error detection method is limited. It is limited because if there is an even number of bit errors in the frame, the evenness or oddness of the number of '1's will be maintained and this method will fail to detect any errors – thus the need for a more rigorous error detection method.
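
Both flavors, and the failure mode just described, can be seen in a few lines of Python (the helper names are hypothetical):

```python
def add_parity_bit(bits, mode="even"):
    """Append a parity bit so the total number of '1's is even
    (or odd, for odd parity)."""
    ones = bits.count("1")
    if mode == "even":
        parity = "0" if ones % 2 == 0 else "1"
    else:
        parity = "1" if ones % 2 == 0 else "0"
    return bits + parity

def parity_ok(frame, mode="even"):
    """Check a received frame (data plus parity bit)."""
    ones = frame.count("1")
    return (ones % 2 == 0) if mode == "even" else (ones % 2 == 1)

def flip(bits, *positions):
    """Simulate transmission errors by flipping the given bits."""
    b = list(bits)
    for p in positions:
        b[p] = "1" if b[p] == "0" else "0"
    return "".join(b)

frame = add_parity_bit("1011001")   # four '1's -> parity bit '0'
print(parity_ok(frame))             # True

print(parity_ok(flip(frame, 0)))    # False -- one error, caught
print(parity_ok(flip(frame, 0, 2))) # True  -- two errors, undetected!
```

The last line is exactly the limitation noted above: an even number of errors preserves the parity and slips through.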

A checksum error detection method can give us more rigor, especially if used alongside a parity bit method. A checksum method, as its name suggests, computes a sum over the message's contents and checks that value against the checksum value added to the message by the sender. While a checksum method can provide more rigor to your error detection efforts, there are still limitations. For example, a simple checksum cannot detect an even number of errors which sum to zero, an insertion of bytes which sum to zero, or even the re-ordering of bytes in the message. While there are some more advanced implementations of the checksum method, including Fletcher's checksum, I will discuss an even more rigorous method here.
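
A toy Python sketch of a byte-sum checksum makes the re-ordering blind spot easy to see. The function name is hypothetical, and real protocols use more elaborate sums (such as the ones'-complement Internet checksum), but the limitation is the same in spirit:

```python
def simple_checksum(data):
    """Sum the bytes of a message modulo 256 -- the minimal
    kind of checksum described above."""
    return sum(data) % 256

msg = b"network"
check = simple_checksum(msg)

# A typical single-byte corruption is detected ...
print(simple_checksum(b"netwprk") == check)   # False

# ... but re-ordering the bytes is not, because addition
# is commutative:
print(simple_checksum(b"krowten") == check)   # True -- undetected!
```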

One of the most rigorous methods of error detection is the cyclic redundancy check (CRC). A CRC treats the message as a polynomial whose coefficients correspond to the bits in the message, and then divides that polynomial by a predetermined, or standard, polynomial called a key. The remainder of this division is what is sent along with the message to the receiver. The receiver performs the same polynomial division with the same key and then checks the answer. If the answers match, then the chances are pretty good that there were no errors. I say pretty good because there are many possible polynomials one could use for a key, and not all polynomials provide equally good error detection. As a general rule, longer polynomials provide better error detection, but the mathematics involved is quite complex and, as with many aspects of technology, there is some debate as to which implementations of this method provide the best error detection.
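
The polynomial division boils down to XOR-based long division over GF(2), which can be sketched in a few lines of Python. The 4-bit key below (x³ + x + 1) is a toy generator chosen for readability, not a standard CRC polynomial; real CRCs such as CRC-32 follow the same procedure with much longer keys.

```python
def gf2_divide(dividend, key):
    """Long division over GF(2): wherever the leading bit is 1,
    XOR the key into the bits; what's left at the end is the
    remainder (len(key) - 1 bits)."""
    bits = list(dividend)
    for i in range(len(bits) - len(key) + 1):
        if bits[i] == "1":
            for j in range(len(key)):
                bits[i + j] = str(int(bits[i + j]) ^ int(key[j]))
    return "".join(bits[-(len(key) - 1):])

def crc_remainder(message, key):
    """Append zero bits to make room for the remainder, then divide."""
    return gf2_divide(message + "0" * (len(key) - 1), key)

KEY = "1011"                 # x^3 + x + 1, a toy generator
msg = "11010011101100"
rem = crc_remainder(msg, KEY)
print(rem)                   # 100

# The receiver divides message + remainder by the same key;
# an all-zero remainder means the frame passed the check.
print(gf2_divide(msg + rem, KEY))   # 000
```

Flipping any single bit of the transmitted string produces a non-zero remainder at the receiver, which is why even this toy key catches all single-bit errors.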

Lastly, I would like to point out that these error detection methods are not limited to transmissions of data over a network medium; they can be used equally well in a data storage scenario where one wants to check that the data has not been corrupted.

In my next article I will discuss layer 3 of the OSI model. I will also explain in a little more detail why routers (mostly) belong in the 3rd layer and not the 2nd. And as always, if you have any questions about this or any previous article, please do not hesitate to email me and I will do my best to answer any and all questions.



OSI Reference Model: Layer 3 Hardware

A discussion of the third layer of the OSI reference model, focusing mostly on routers and why they are usually placed in this layer.

In my last two articles I discussed the Open System Interconnect (OSI) reference model and its first two layers. In this article I will discuss the third layer; the network layer. The network layer is concerned with getting data from one computer to another. This is different from the data link layer (layer 2) because the data link layer is concerned with moving data from one device to another directly connected device. For example, the data link layer is responsible for getting data from the computer to the hub it is connected to, while the network layer is concerned with getting that same data all the way to another computer, possibly on the other side of the world.

The network layer moves data from one end point to another by implementing the following functions:

  • Addressing
  • Routing
  • Encapsulation
  • Fragmentation
  • Error handling
  • Congestion control

Addressing

Those who have read my previous article may be curious why layer 3 implements addressing when I also said that layer 2 implements addressing. To cure your curiosity, remember that I wrote that the layer 2 address (the MAC address) corresponds to a specific network access point as opposed to an address for an entire device like a computer. Something else to consider is that the layer 3 address is purely a logical address which is independent of any particular hardware; a MAC address is associated with particular hardware and hardware manufacturers.

An example of layer 3 addressing is Internet Protocol (IP) addressing. An illustration of an IP address can be seen in Figure 1.

Figure 1: Illustration of an IP address (Source:Wikipedia.com)

Routing

It is the job of the network layer to move data from one point to its destination. To accomplish this, the network layer must be able to plan a route for the data to traverse. A combination of hardware and software routines accomplishes this task, known as routing. When a router receives a packet from a source, it first needs to determine the destination address. It does this by removing the headers previously added by the data link layer and reading the address from the predetermined location within the packet, as defined by the standard in use (for example, the IP standard).

Once the destination address is determined, the router will check to see if the address is within its own network. If it is, the router will then send the packet down to the data link layer (conceptually speaking, that is), which will add headers as I described in my previous article and send the packet to its destination. If the address is not within the router's own network, the router will look up the address in a routing table. If the address is found within this routing table, the router will read the corresponding destination network from the table and send the packet down to the data link layer and on to that destination network. If the address is not found in this routing table, the packet will be sent for error handling. This is one source of errors which can be seen in data transmission across networks, and is an excellent example of why error checking and handling is required.
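
The lookup just described can be sketched with Python's standard ipaddress module. The table entries and next-hop values here are hypothetical (203.0.113.0/24 is a documentation range); a real router also applies longest-prefix matching, which the sketch mimics:

```python
import ipaddress

# A hypothetical routing table: network prefix -> next hop.
ROUTING_TABLE = {
    ipaddress.ip_network("192.168.1.0/24"): "local",
    ipaddress.ip_network("10.0.0.0/8"): "10.0.0.1",
    ipaddress.ip_network("0.0.0.0/0"): "203.0.113.1",  # default route
}

def route_packet(dst):
    """Pick the longest matching prefix for the destination.
    An empty match list is what triggers error handling (an
    ICMP 'destination unreachable' message)."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTING_TABLE if addr in net]
    if not matches:
        return "destination unreachable"
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTING_TABLE[best]

print(route_packet("192.168.1.42"))  # local -> hand down to layer 2
print(route_packet("10.20.30.40"))   # 10.0.0.1
print(route_packet("8.8.8.8"))       # 203.0.113.1 (default route)
```

With a default route present, the unreachable branch never fires; remove the 0.0.0.0/0 entry and unmatched destinations fall through to error handling, exactly as the paragraph describes.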

Encapsulation

When a router sends a packet down to the data link layer which then adds headers before transmitting the packet to its next point, this is an example of encapsulation for the data link layer.
Like the data link layer, the network layer is also responsible for encapsulating data it receives from the layer above it; in this case, that is the data received from layer 4, the transport layer. In fact, every layer is responsible for encapsulating data it receives from the layer above it – even the seventh and last layer, the application layer, because an application encapsulates data it receives from users.

Fragmentation

When the network layer sends data down to the data link layer, it can sometimes run into trouble. That is, depending on what type of data link layer technology is in use, the data may be too large. This requires that the network layer be able to split the data up into smaller chunks, each of which can be sent to the data link layer in turn. This process is known as fragmentation.
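
A sketch of the idea in Python, with a hypothetical helper; real IP fragmentation also records an offset and flags in each fragment's header so the receiver can reassemble the original data:

```python
def fragment(payload, mtu):
    """Split a payload into chunks no larger than the MTU of
    the underlying data link technology; each chunk would get
    its own layer 3 header in a real implementation."""
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]

# A 3000-byte payload over a 1500-byte MTU link becomes two chunks:
fragments = fragment(b"x" * 3000, 1500)
print([len(f) for f in fragments])   # [1500, 1500]
```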

Error handling

Error handling is an important aspect of the network layer. As I mentioned earlier, one source of errors is when routers do not find the destination address in their routing table. In that case, the router needs to generate a destination unreachable error. Another possible source of errors is the TTL (time to live) value of the packet. If the network layer determines that the TTL has reached a zero value, a time exceeded error is generated. Both the destination unreachable error and the time exceeded error messages conform to specific standards as defined in the Internet Control Message Protocol (ICMP).

Fragmentation can also cause errors. If the fragmentation process takes too long, the device can throw an ICMP time exceeded error.

Congestion control

Another responsibility of the network layer is congestion control. As I am sure you know, any given network device has an upper limit as to the amount of throughput the device can handle. This upper limit is always creeping upward but there are still times when there is just too much data for the device to handle. This is the motivation for congestion control.

There are many theories for how to best accomplish this, most of which are quite complicated and beyond the scope of this article. The basic idea of all of these methods is that you want to make the data senders compete for their messages to be the ones accepted into the available throughput. The congested device wants to do this in a way that lowers the overall amount of data it is receiving. This can be accomplished by 'punishing' the senders that are sending the most data; to avoid the punishment, those senders 'slow' their sending activity, thereby reducing the amount of data seen by the congested device (which, at that point, is no longer congested).
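
The article doesn't name a specific algorithm, but the 'punish the heaviest senders' idea is the essence of additive-increase/multiplicative-decrease (AIMD), the scheme TCP's congestion control is built on. A toy sketch, with hypothetical names and a lost packet standing in for the punishment:

```python
def aimd(rounds, loss_rounds):
    """Additive-increase / multiplicative-decrease: grow the
    send window slowly each round, and halve it whenever the
    network signals congestion (here, a scripted loss event)."""
    window = 1.0
    history = []
    for r in range(rounds):
        if r in loss_rounds:
            window = max(1.0, window / 2)   # multiplicative decrease
        else:
            window += 1.0                   # additive increase
        history.append(window)
    return history

# The window climbs, gets halved at the loss, then climbs again:
print(aimd(8, loss_rounds={4}))
# [2.0, 3.0, 4.0, 5.0, 2.5, 3.5, 4.5, 5.5]
```

The sawtooth pattern is the 'competition' described above: the fastest senders are the ones most likely to hit a loss, so they back off the most, and the aggregate load falls.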

Author's rant: Congestion control algorithms are quite complex for various reasons. Firstly, the mathematics involved is intense. So, for all of you who have ever wondered why people study mathematics in university and what job they could possibly get with that education ... this is an important one, and one that pays well with networking companies such as Cisco and Nortel. Secondly, after you have determined the proper mathematics to accomplish this task, how can it be implemented in an efficient and fast manner? This is the domain of engineers, who need to understand the mathematics, possible software implementation strategies, possible hardware implementation strategies, and design methodologies. Many people, including those who work in the tech industry, do not really understand what these, and other, professions bring to the table: they should. It is important.

In my next article I will discuss the fourth layer of the OSI reference model; the transport layer. Until then, as always, if you have any questions about this or any previous article please feel free to send me an email; I will do my best to answer any and all questions.


OSI Reference Model: Layer 4 Hardware

The previous articles in the series have discussed the first three layers of the OSI Reference Model. We will now discuss the fourth layer; the Transport layer.

The Transport layer provides the functionality to transfer data from one end point to another across a network. The Transport layer is responsible for flow control and error recovery. The upper layers of the OSI Reference Model see the Transport layer as a reliable, network-independent, end-to-end service. An end-to-end service within the transport layer is classified at one of five different levels of service: Transport Protocol (TP) class 0 through TP class 4.

TP class 0

TP class 0 is the most basic of the five classification levels. Services classified at this level perform segmentation and reassembly.

TP class 1

TP class 1 services perform all of the functions of those services classified at TP class 0 as well as error recovery. A service at this level will retransmit data units if they were not received by the intended recipient.

TP class 2

TP class 2 services perform all of the functions of those services classified at TP class 1, as well as multiplexing and demultiplexing – more on this below.

TP class 3

TP class 3 services perform all of the functions of those services classified at TP class 2 as well as sequencing of the data units to be sent.

TP class 4

TP class 4 services perform all of the functions of those services classified at TP class 3 as well as the ability to provide its services over either a connection oriented or connectionless network. This class of Transport Protocols is the most common and is very similar to the Transmission Control Protocol (TCP) of the Internet Protocol (IP) suite.

I say that TP class 4 is very similar to TCP, rather than identical, because there are some key differences. TP class 4 uses 10 data types while TCP uses only one. This means that TCP is simpler, but it also means that TCP must contain many headers. TP class 4, while more complicated, can contain one quarter of the headers that TCP contains, which obviously reduces a lot of overhead.

Connection oriented networks

Connection oriented networks are like your telephone. A connection is made before data is sent and is maintained throughout the entire process of sending data. With this type of network, routing information only needs to be sent while setting up the connection and not during data transmission. This reduces a lot of overhead which improves communication speed. This type of communication is also very good for applications, like voice or video communications, where the order of the data received is especially important.

Connectionless networks

Connectionless networks are the opposite of connection oriented networks, in that they do not set up a connection prior to sending data, nor do they maintain any connection between the two end points. This requires that routing information be sent with each packet, which increases the communication overhead.

Keep in mind that just because data is being sent in packets does not mean that it is a connectionless network; virtual circuits are an example of a connection oriented network that uses packets.

Since, in my previous articles, I have already covered aspects of error detection and recovery, and since this article is focused on hardware, I am going to give a basic introduction to a widely known (yet poorly understood) aspect of the Transport layer: multiplexing and demultiplexing.

Multiplexing

Multiplexing (or muxing, as it is often called) is one of those words that people often hear without really understanding what it means. Many people may know that muxing is the process of combining two or more signals into one signal, but how exactly is that done? There are multiple ways in which this can be done. Digital signals can be muxed in one of two ways: time-division multiplexing (TDM) and frequency-division multiplexing (FDM). Optical signals use a method called wavelength-division multiplexing, although this is essentially the same thing as FDM (wavelength, of course, being inversely proportional to frequency).

To demonstrate how muxing works, let's take a simple case of TDM. In this example let's assume two input signals. A two-input muxing device will require three inputs: one for each of the signals and one for the control signal. A two-input muxing device will also have one output. This device will alternate between the two input signals, putting the resulting signal onto its output.

Figure 1: Logic gate schematic of a two-input mux (Courtesy: www.cs.uiowa.edu)

Figure 1, above, shows a two-input mux. The two signals are represented as d0 and d1, while the control signal is represented as c. The output, which is a function of the inputs and the control signal, is represented as f. The symbols in this figure are standard symbols for representing logic gates; Figure 2 shows the meaning of these three gates.

Figure 2: Basic logic gates. Courtesy of www.cs.uiowa.edu
The mux works by receiving a digital signal on the c input. This c signal goes directly to one input of the first 'AND' gate, and to the 'NOT' gate. The 'NOT' gate inverts the signal and then sends it to one input of the second 'AND' gate. The output of an 'AND' gate will only be high when both the control signal and the input signal (d0 or d1) are high. Since the control signal is sent through a 'NOT' gate before reaching the second 'AND' gate, only one of the two 'AND' gates will see a high control signal at any one instant in time. This means that f will alternate between being equal to d0 and being equal to d1 at the frequency of c.

Now you might be thinking "that's great, but who cares about getting half the signal?" Well, that does not necessarily have to be the case. If the frequency of the control signal is at least twice the frequency of the input signals, then the output f will contain enough information about both d0 and d1 that a demuxer will be able to reconstruct the original input signals. This is the core idea of the Nyquist-Shannon sampling theorem.

Looking at the logic gates in Figures 1 and 2, those of you with programming or scripting experience will recognize these logic functions as common tools in a programmer's repertoire. Keep in mind that while these functions are found in software programs, I am strictly talking here about hardware functions, which are carried out with a series of transistors, acting as switches, arranged in clever ways to achieve these logic functions.
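
The gate-level description can be mirrored in a few lines of Python, with each gate modeled as a tiny function. This is an illustrative sketch with hypothetical names, assuming the convention that c = 1 selects d1:

```python
def NOT(a):
    return 1 - a

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def mux2(d0, d1, c):
    """The two-input mux of Figure 1 built from the gates of
    Figure 2: c gates d1 through one AND, NOT(c) gates d0
    through the other, and OR combines them into f."""
    return OR(AND(d1, c), AND(d0, NOT(c)))

# Alternating c between 0 and 1 interleaves the two inputs:
d0, d1 = [0, 0, 1, 1], [1, 0, 1, 0]
f = [mux2(d0[i], d1[i], c=i % 2) for i in range(4)]
print(f)   # [0, 0, 1, 0] -- d0[0], d1[1], d0[2], d1[3]
```

Because only one AND gate ever sees a high control signal, exactly one input reaches f at any instant, which is the behavior described above.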

Demultiplexing

A demuxer is basically the opposite of a muxer. A demuxer has one input signal and, in the case described above, two output signals. A demuxer, of course, also has a control signal, although with demuxers it is often called the address signal. It is called an address signal because the demuxing circuit can also be used simply to choose which output pin the input signal is placed on.
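
A minimal Python sketch of a two-output demuxer, assuming the convention that an address of 0 routes the input to the first output (`demux2` is a hypothetical name):

```python
def demux2(f, c):
    """Route the single input f to one of two outputs, chosen
    by the control (address) signal c -- the mirror image of
    the mux: the unselected output stays low."""
    return (f, 0) if c == 0 else (0, f)

print(demux2(1, 0))   # (1, 0) -- input appears on output 0
print(demux2(1, 1))   # (0, 1) -- input appears on output 1
```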

In my next article I will discuss the fifth layer of the OSI Reference Model. Until then, and as always, if you have any questions about this or any other article of mine, do not hesitate to send me an email; I will do my best to get back to you.



Thursday, August 7, 2008

Removing The Last Exchange 2003 Server From Exchange 2007 (Part 1)

The steps required in order to remove the last Exchange 2003 server from an organization that has been migrated to Exchange 2007.

In a previous article here on MSExchange.org, I covered the process required to correctly remove the first Exchange 2003 server that had been installed into an administrative group. There were several steps required to achieve this without breaking some functionality. A similar process needs to be undertaken in order to remove the last Exchange 2003 server from an organization that has been migrated to Exchange 2007. Tucked away in the release notes of the Release To Manufacturing (RTM) version of Exchange 2007, Microsoft detailed this process. That process has now made its way to the main Exchange 2007 documentation. In this article we’ll look at this process and go through the items in it to see how it shapes up, including plenty of screen shots as usual here on MSExchange.org.

As is normally the case with Exchange, there are many different possible configurations available to demonstrate and so it’s always a challenge to pick a configuration which appeals to a wide audience but at the same time isn’t overly complicated. Therefore, for this article I’ve taken the approach where a single Exchange 2003 server is coexisting with a single Exchange 2007 server which will obviously fit many situations. For larger systems, the same principles can be applied.

It goes without saying that the first thing you need to make sure of is that all user mailboxes have been migrated to the Exchange 2007 server. I won’t be covering the process of doing this within this article as I’ve already done that in a separate article here on MSExchange.org. However, personally I’m a big fan of ensuring that the new Exchange 2007 server is handling the production load as soon as possible. That doesn’t just include migrating the user mailboxes to it as there are other important roles the Exchange 2007 server can perform from the moment it’s made a production server. The first and most obvious is the handling of Internet email and so the remainder of this article will cover setting this up before we move on to other tasks in the next part. If you’ve already configured Internet email to be routed via Exchange 2007 you can skip this article, but I thought it useful to detail this process for completeness.

Internet Email

Most Exchange 2003 systems have a simple SMTP Connector configured to handle Internet email and so that’s the example I’ll be using within this article. A typical example is an SMTP Connector that simply has an address space of * and a cost of 1, meaning that the connector handles all SMTP email for any external SMTP domain. As a result of the coexistence between Exchange 2003 and Exchange 2007, it’s possible to see the SMTP Connector within the Exchange Management Console running on the Exchange 2007 server, where it appears as a Send Connector as shown in Figure 1.

Figure 1: Exchange 2003 SMTP Connector

Logic may dictate that, since an SMTP Connector can have multiple source bridgehead servers in Exchange 2003, the Exchange 2007 server could be added as an additional bridgehead server in order to make the transition seamless. However, it’s not possible to add the Exchange 2007 server as an additional source bridgehead server via the Exchange System Manager or Exchange Management Console snap-ins as the Exchange 2003 and Exchange 2007 servers are in different routing groups. If you do try from the Exchange Management Console, you will get an error such as the one shown in Figure 2.

Figure 2: Exchange 2007 as Additional Bridgehead Server Error

Therefore, the correct way to ensure that all Internet email is handled by Exchange 2007 is simply to create a new Send Connector in Exchange 2007.

New Send Connector

Since the existing SMTP Connector has a cost of 1, it makes sense to raise this cost to, say, 10 before creating the new Send Connector. That way, the new Send Connector can be created with a cost of 1 meaning that it will be used in preference to the SMTP Connector. Of course, the alternative is to simply delete the SMTP Connector from Exchange System Manager once you’ve created the new Send Connector, thus ensuring that the only path available is via Exchange 2007. However, it’s always nice to leave old configurations in place until you are sure that the new configuration is working. Here are the steps required to create the new Send Connector:

  1. In the Exchange Management Console, expand Organization Configuration, click Hub Transport and then select the Send Connectors tab.
  2. Either right-click Hub Transport and choose New Send Connector… from the context menu, or choose the same option from the action pane.
  3. The New SMTP Send Connector wizard appears and consists of the following screens. I’ll briefly cover each screen and what you should enter.

    - First up is the Introduction screen. In the Name field, give this connector a suitable name such as Internet Email. It may help to distinguish this name from the name of any existing SMTP Connectors hosted on Exchange 2003. In the drop-down list used to configure the intended use of the connector, choose Internet. The completed screen is shown in Figure 3.

Figure 3: New SMTP Send Connector Introduction Screen
    - Next is the Address Space screen, where you simply need to click the Add button and, in the resulting Add Address Space window, type the domain name to which you want to deliver Internet email. The most common domain name typed here is simply *, which represents all external domain names from your Exchange organization. Make sure the cost is lower than the cost of the Exchange 2003 SMTP Connector.
    - The next screen is the Network Settings screen where you configure the Send Connector to either use DNS or a smart host to send Internet email. Here you’ll likely replicate the configuration of the Exchange 2003 SMTP Connector. In Figure 4 below I’ve used the IP address of a smart host, which therefore assumes that the Exchange 2007 Edge Transport server role hasn’t been deployed. Don’t forget to ensure that your smart host allows connections from the Exchange 2007 server.

Figure 4: New SMTP Send Connector Network Settings Screen
    - If you choose to route through a smart host, the next screen presented is the Configure smart host authentication settings screen. This allows you to specify any authentication options that your smart host may require, such as basic or Exchange Server authentication. In my case, no authentication is required so I just select None and progress to the next screen.
    - Next, the Source Server screen is presented, as shown in Figure 5. Note that the Exchange 2007 server name is already populated in the list. In situations where you have more than one Hub Transport server, you can add additional servers via the Add button.

Figure 5: New SMTP Send Connector Source Server Screen
    - The penultimate screen is the New Connector screen that allows you to review your settings. Clicking the New button then proceeds to create the new SMTP Send Connector, the result of which is then displayed at the Completion screen. At this point, you’ve now created a new SMTP Send Connector to handle Internet email.

As I mentioned earlier, this connector will handle ALL external email for domains other than those configured on your local Exchange 2007 server. This may not always be desirable. For example, if you have a private network link to a partner organization, you can create an additional SMTP Send Connector and specify the partner SMTP domain name in the address space field. This would be a more explicit match than the general * domain and therefore you can control message flow to this domain. In the Network Settings screen of this connector, you’d likely specify a different IP address for a different smart host.







Removing The Last Exchange 2003 Server From Exchange 2007 (Part 2)

Configuring inbound Internet email, as well as moving the public folders and Offline Address Book generation to the Exchange 2007 server.

Introduction

In part one of this four-part article, we started the process of allowing the Exchange 2003 server to be removed by creating a new Send Connector on the Exchange 2007 server so that all outbound Internet email can be processed by Exchange 2007 rather than Exchange 2003. In part two of this article, we’ll look at some basic inbound Internet email considerations and then move swiftly on to the process of moving public folders to the new Exchange 2007 server, as well as ensuring that the Offline Address Book generation server is specified as the Exchange 2007 server.

Inbound Internet Email

The steps listed in part one of this article take care of outbound Internet email from your Exchange 2007 organization. For inbound Internet email, I’m making the assumption in the lab environment that the Exchange 2007 Edge Transport server role hasn’t been deployed and that you are using a 3rd party SMTP hygiene product to filter email before it is sent to your users. You’d obviously need to ensure that any smart host that processes inbound Internet email for your Exchange 2007 organization is configured to send these messages to the Exchange 2007 server and not the Exchange 2003 server. Specifically, the smart host sends messages to the Hub Transport server role. Be aware, though, that the default SMTP Receive Connector configured on an Exchange 2007 Hub Transport server does not allow anonymous connections by default, which are required to accept Internet email when no Edge Transport server is deployed. Note that the process is slightly different when the Edge Transport server role is used.

To modify the properties of your default receive connector on your Hub Transport server, do the following:

  1. Run the Exchange Management Console, navigate to Server Configuration and then click the Hub Transport object. In the result pane, there is only the Receive Connectors tab displayed which shows a list of receive connectors configured on this Hub Transport server.
  2. Bring up the properties of the default Receive Connector, in my case called Default E2K7, and go to the Permission Groups tab.
  3. On the Permission Groups tab, select the Anonymous users check box and then click OK to close the window and accept the configuration. This configuration is shown in Figure 6.

Figure 6: Default Receive Connector Anonymous Permissions
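If you prefer the Exchange Management Shell, the same change can be made with the Set-ReceiveConnector cmdlet. The sketch below assumes the lab names used in this article (server E2K7, connector Default E2K7); note that the PermissionGroups parameter replaces the existing list, so the default groups need to be re-specified alongside AnonymousUsers:

```powershell
# Add the Anonymous Users permission group to the default Receive Connector.
# "E2K7\Default E2K7" is the lab connector name used in this article;
# substitute your own server and connector names.
Set-ReceiveConnector -Identity "E2K7\Default E2K7" `
    -PermissionGroups AnonymousUsers,ExchangeUsers,ExchangeServers,ExchangeLegacyServers

# Verify that AnonymousUsers now appears in the list
Get-ReceiveConnector -Identity "E2K7\Default E2K7" | Format-List Name,PermissionGroups
```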

Public Folders

Although public folders are de-emphasized in Exchange 2007, they are still very much part of the product and there’s a fair chance that you are still using them at the moment within your Exchange infrastructure. Obviously you are going to need to migrate the data contained within the public folders over to Exchange 2007 and effectively the process is the same as if you were migrating the public folders to a different Exchange 2003 server.

Moving public folders is essentially a two-step process. The first step is to ensure that a replica of the public folder exists on the Exchange 2007 server whilst the second step is to remove the replica from the Exchange 2003 server. Fortunately, Microsoft has made the whole process really easy with two main options for us to use. First, there’s the Move All Replicas option in Exchange 2003 Service Pack 2, and second there’s the MoveAllReplicas.ps1 script provided with Exchange 2007. Let’s look at both options.

Exchange 2003 Service Pack 2 introduced a rather handy menu option that you will find on the properties of the Exchange 2003 public folder store. In Exchange System Manager running on your Exchange 2003 server, navigate down the hierarchy and locate the Exchange 2003 server. Expanding the server object, continue to navigate down underneath the relevant storage group object until you find the public folder database. Here, you can right-click the public folder database and you’ll see the Move All Replicas option as shown in Figure 7 below.


Figure 7: Move All Replicas Menu Option

This menu option will automatically move all public folders that are hosted on this public folder database to an alternative public folder database of your choice. Before we do that though, let’s confirm how many public folders we need to move. To do this, continue to expand the public folder database object in Exchange System Manager until you see the Public Folder Instances object. Selecting the Public Folder Instances object will show the instances of public folders that occur on this particular public folder database and you can see from Figure 8 that we have a small number of public folders to deal with. This includes both user public folders and additionally system public folders such as the Schedule+ Free/Busy folder.

Figure 8: Public Folder Instances

The goal with the migration of the public folders to the Exchange 2007 server is to end up with a Public Folder Instances object on Exchange 2003 that shows zero entries in the list, which can be accomplished via the Move All Replicas menu option for this example. However, the main thing to remember with regard to public folder replication and re-homing is patience, particularly in large environments. It could take several days to complete the replication and re-homing process in very large environments as there are many different factors to be taken into consideration. Later in this article we’ll look at removing the public folder database from the Exchange 2003 server. Just remember, do not proceed with the attempted removal of the Exchange 2003 public folder database or the actual server unless there are zero entries in the Public Folder Instances tab.
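If you want to keep an eye on what has arrived on the Exchange 2007 side while you wait, the Exchange Management Shell gives you a quick view. This is a sketch assuming the lab server name E2K7 used throughout this article:

```powershell
# List the public folders (including system folders) now hosted on the
# Exchange 2007 server, together with their item counts
Get-PublicFolderStatistics -Server E2K7 | Format-Table Name,ItemCount
```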

The Move All Replicas option itself is simple enough to follow. Once you choose the option, a Move All Replicas window will appear asking you to select the server to which you want the public folders moved. This is shown in Figure 9 where you can see that the server E2K7 is already highlighted since that’s the only other server running a public folder database.

Figure 9: Moving All Public Folder Replicas

Once you’ve chosen the relevant server and clicked OK, a warning prompt appears telling you that the process may take some time and to check the Public Folder Instances tab to confirm the process has been completed. Once you click OK on this warning, another window titled Propagating properties to subfolders will appear and will show the progress as the settings are applied. Once this window disappears, you need to wait for the move to occur in the background. As I said earlier, you need to wait until the Public Folder Instances list is empty as shown below.

Figure 10: No More Public Folder Instances

The MoveAllReplicas script provided with Exchange 2007 is even easier to use. You will find this script in the \Program Files\Microsoft\Exchange Server\Scripts folder. From the Exchange 2007 server, run the Exchange Management Shell and then execute the following script:

.\MoveAllReplicas.ps1 -Server E2K3 -NewServer E2K7

As you can see there are only two parameters, namely Server, the source server, and NewServer, the target server. Once run successfully, the script doesn’t echo anything to the screen so once again, check the Public Folder Instances object on the Exchange 2003 server to confirm that no replicas are left on the Exchange 2003 server.
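Putting this together, a typical invocation from the Exchange Management Shell looks like the sketch below, where E2K3 and E2K7 are the lab server names used throughout this article:

```powershell
# Change to the scripts folder shipped with Exchange 2007 so the
# script can be invoked with a relative path
cd "C:\Program Files\Microsoft\Exchange Server\Scripts"

# Move every public folder replica from the Exchange 2003 server (E2K3)
# to the Exchange 2007 server (E2K7); the script is silent on success
.\MoveAllReplicas.ps1 -Server E2K3 -NewServer E2K7
```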

Offline Address Book

One of the components that you should have updated when removing the first Exchange 2003 server installed into an administrative group was the Offline Address List generation server. This is still a requirement when removing the last Exchange 2003 server from an Exchange 2007 environment, since the server responsible for generating the Offline Address Book (note the name change in Exchange 2007) is likely to be the Exchange 2003 server. Here’s the process to do this using the Exchange Management Console:

  1. Run the Exchange Management Console.
  2. Select Organization Configuration and then select the Mailbox object. In the list of tabs displayed, click the Offline Address Book tab and you should see a screen similar to that shown in Figure 11. Note that the Generation Server column references the Exchange 2003 server name.

Figure 11: Offline Address Book Entry in Exchange Management Console
  3. Right-click the entry for the Default Offline Address List and choose the Move option from the context menu. This will bring up the Move Offline Address Book wizard window, which consists of a single configuration screen.
  4. On the opening screen, click the Browse button and, in the resulting Select Mailbox Server window, locate and choose the Exchange 2007 mailbox server.
  5. Back at the opening screen, ensure that the new Exchange 2007 server name is referenced in the Offline address book generation server field as shown in Figure 12.

Figure 12: Preparing to Move the OAB
  6. Once you are happy that the correct configuration has been selected, click the Move button. The Completion screen should then reveal that the move has been successful.
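For reference, the cmdlet behind this wizard is Move-OfflineAddressBook. A minimal sketch, assuming the default Offline Address Book name and the lab server name E2K7 used in this article:

```powershell
# Re-home Offline Address Book generation onto the Exchange 2007 server;
# adjust the identity and server name for your own organization
Move-OfflineAddressBook -Identity "Default Offline Address List" -Server E2K7
```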

We’ll look at using the Exchange Management Shell to move the Offline Address Book in the next part of this article.