Transfer protocols enable data transfer between host
systems and their targets, such as storage devices. Transfer protocols find the
appropriate target device, translate application commands into commands
understood by the storage device, and move the data between the device and the
computer system’s memory.
Networking protocols define the way that information is
transferred among computers and target devices on the network.
Depending on the user application, data is transported either as files or as
low-level blocks. File-based data on networked Windows platforms is sent using
SMB/CIFS (Server Message Block/Common Internet File System) transfer
protocols, and on networked UNIX/Linux platforms using the NFS (Network File
System) transfer protocols. The family of SCSI (Small Computer System Interface)
protocols is used to transfer block level data for most storage interconnects.
While file transport can occur over existing company
intranets (Local Area Networks, or LANs), as well as over Metropolitan and Wide
Area Networks (MANs and WANs), transport of block data was initially limited to
short distances within the same building or, in many cases, the same physical
enclosure. This changed with the development of serial SCSI architecture, which
enabled rapid transfer of block data across distances as great as metropolitan
areas, making storage area networking (SAN) technology an effective (although
costly) solution for enterprise-sized businesses. For smaller
businesses (perhaps as much as 85% of the storage market), Fibre
Channel’s dedicated hardware and transfer protocol technologies have made
the solution too costly to implement.
The development of iSCSI and the February 2003
ratification of iSCSI transport protocol standards by the Internet Engineering
Task Force (IETF)
stand as the first challenge to the dominance of
Fibre Channel. The iSCSI protocol, which unifies the well-established Transmission
Control Protocol/Internet Protocol (TCP/IP) networking protocol and the SCSI
storage protocol, defines the rules and processes for transmitting and
receiving block storage data over TCP/IP networks. Block transport of data will
be enabled without requiring expensive installation of proprietary wiring
technologies. Storage area networking (SAN)—a technology which solves many
problems of scalability, disaster protection, efficient backup, and data
protection—is enabled by iSCSI, making it available to midsize and small
businesses as well as enterprise businesses. Moreover, using existing IP networks,
iSCSI enables the networking of storage over metropolitan or wide areas. Given
these and other benefits, iSCSI is now viewed as an enabling technology that
can help bring the benefits of sophisticated storage solutions to a broader
range of businesses.
To use the iSCSI protocols and standards, drivers and
other components specific to each operating system had to be developed.
Microsoft began support for iSCSI on the Windows platform in 2000, and has
since focused on developing an integrated architecture to capitalize on
existing Windows functionality, ease of management, and strong security. In
partnership with the IETF, Microsoft has made contributions to the iSCSI
security draft (2001), and co-contributed to the Internet Storage Name Service (iSNS)
protocol, the mechanism by which the host system discovers storage devices on a
network.
All data, whether sent from the application in file format
or as blocks, is ultimately stored in block format. Block-data transfer between
computers and storage devices is under control of a hardware device inside the
computer system known variously as the host bus adapter (HBA),
storage controller, or network adapter. Transfer occurs across a single cable
(bus), as in the case of direct attached storage, or across a network.
There are a number of dominant interconnect wiring
technologies, each with its associated transfer protocol, including Advanced
Technology Attachment (ATA), SCSI, Fibre Channel, and the newly ratified iSCSI standard.
The type of interconnect used in a particular configuration depends on how
storage is attached to computers (whether internally or externally), whether or
not the storage devices are to be shared, and the distances over which the
sharing must reliably occur. The following sections briefly outline each of the
major interconnect technologies and explore the limitations of each.
Most desktops and laptops store data on
internal disk drives, using either parallel or serial ATA as the internal
storage interconnect. Parallel
ATA (also sometimes referred to as integrated drive electronics (IDE)) is the
dominant interconnect technology for internal storage, largely because of its
simplicity and ease of implementation. Despite these advantages, parallel ATA
has performance and distance limitations, and does not provide a standardized
mechanism for hot-plugging devices. Serial ATA promises to address these by
improving bandwidth and providing more reliable command queuing, while doubling
the length of the interconnect.
Direct Memory Access (DMA) allows transfer of
data from hard drive to memory without using the host CPU, and is the most
commonly used data transfer mode. Because the distance traveled is so short,
transfer of data is relatively fast (up to 133 MB/s).
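As a back-of-envelope illustration of that rate (ignoring seek time and protocol overhead, and noting that 133 MB/s is a peak interface rate, not sustained drive throughput):

```python
def transfer_time_seconds(megabytes, rate_mb_per_s=133):
    """Idealized transfer time at the Ultra ATA/133 burst rate."""
    return megabytes / rate_mb_per_s

# Moving a 1 GB (1024 MB) file at the full burst rate:
print(round(transfer_time_seconds(1024), 1))  # ~7.7 seconds
```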
Even with the improvements promised by serial
ATA, and its low cost, embedded storage has two
disadvantages: it scales poorly with storage growth, and it does not allow
storage to be shared among multiple computers.
The SCSI bus consists of the cabling and
connectors that directly attach a computer (usually a server) to multiple
devices, whether for storage or other functions. Storage devices include hard
disks and tape drives, as well as storage subsystems such as a redundant array
of inexpensive disks (RAID).
Parallel SCSI is the dominant wiring
technology over which the SCSI commands are transmitted, although the SCSI-3
standard has been further developed to include serial SCSI (see the Fibre Channel
section below).
Devices are identified by a SCSI target ID or
address. Narrow SCSI has seven addresses available for use by devices; wide
SCSI doubles the width of the bus, and allows 15 addresses. Each SCSI address
can itself be subdivided, as might be necessary when attaching a RAID storage
subsystem. The storage subsystem sits at a single SCSI address, and the disks
inside the subsystem each receive a unique address by logically dividing each
physical SCSI address into sub-units, known as logical units, and identified by
logical unit numbers (LUN). Windows supports up to 256 LUNs at each SCSI target
address.
Because there are multiple devices on the SCSI
bus, a process is required to ensure that the devices can obtain access to the
cable to transfer commands and data. Each SCSI ID is assigned a priority. The
device with the highest priority takes control of the bus when it needs to
transfer data, a strategy that can result in contention and a “lack of
fairness” to lower priority devices.
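The priority scheme above can be sketched in a few lines; the priority map here is illustrative (on a real narrow SCSI bus, ID 7, usually the host adapter, has the highest priority), not a driver implementation:

```python
def arbitrate(requesting_ids, priority):
    """Return the ID that wins the bus among those requesting it."""
    return max(requesting_ids, key=lambda dev_id: priority[dev_id])

# Assume a simple descending priority map: higher ID = higher priority.
priority = {dev_id: dev_id for dev_id in range(8)}  # ID 7 = highest

print(arbitrate([2, 5, 7], priority))  # 7 wins the bus
print(arbitrate([1, 3], priority))     # 3 wins; ID 1 must wait
```

Repeated losses by low-ID devices are exactly the "lack of fairness" the text describes.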
The SCSI protocol was designed as the
communication language between the SCSI controller and the storage devices. The
original SCSI bus was limited to 8-bit transfers (referred to as Narrow SCSI),
and transferred data at a rate of up to 5 megabytes/second (MB/s). The
SCSI-2 protocol standard introduced Fast SCSI (8 bits, 10 MB/s) and
Wide SCSI (16 bits, 20 MB/s), as well as the intelligence to support
command queuing (up to 256 commands per logical unit). The SCSI Parallel
Interface version 5 (one of many SCSI-3 specifications) is the current version
of the protocol, supporting a throughput of 320 MB/s. (The next speed
increase to 640 MB/s has now been dropped in favor of using a new form of
serialized SCSI known as Serial Attached SCSI (SAS); the allowable distances
for using parallel signals at this higher data rate would have been too short
to be useful.)
The two most critically limiting factors of
parallel SCSI are the restricted distance the device can be from the host, and
the number of devices that can be attached. In the first case, the copper SCSI
cabling cannot be more than 12 or 25 meters from the host (depending on
the signal level) without experiencing degradation in the quality of the
signal. In the second case, as more devices are added to the bus, the total
distance supported declines and the large pin count connectors become more
prone to failure.
Networking two or more computers facilitates the sharing
of both data and storage devices, and it does so over longer distances than are
enabled with the SCSI interconnect. NFS and SMB/CIFS were developed to
facilitate transmission of files across IP networks.
Transfer of block data across networks was not initially enabled using IP
network technologies. Instead, Fibre Channel was developed as the physical
medium, and the SCSI-3 protocols were adapted for use with storage devices on
the network.
Fibre Channel is a network solution that
eliminates the need for direct cabling between the host computer and the
storage device, thus allowing multiple computers to access and share the same
storage devices. Fibre Channel networks use dedicated HBAs to provide high
performance block I/O transfers.
Fibre Channel is based on serial wiring
technology (and therefore does not have the electrical limitations associated
with parallel technologies). Despite the name, Fibre Channel wires can consist
of either copper or fiber optic cabling. Since copper interconnects are limited
in the distances they can extend (signal degradation and interference occurs
over long distances), they are best used for departmental configurations. Fiber
optic cabling and connectors allow transmission distances up to 10 km or
more, enabling data replication over a metropolitan area for the purposes of
disaster recovery.
In Fibre Channel loop networks, all the cables
directly connect to a hub, so devices must arbitrate for transmission control.
Fibre Channel fabrics are networks connected by switches, making arbitration
unnecessary. Both the computer and storage device can have one or more ports to
connect them to the network. Each port is assigned a unique port address, and
millions of such addresses are possible in a Fibre Channel fabric (126 in a
loop configuration).
A number of protocols can be implemented over
Fibre Channel, including IP and SCSI-3. The Fibre Channel Protocol (FCP or the
later FCP-2) maps the SCSI-3 protocol standard to implement serial SCSI over Fibre
Channel.
Fibre Channel supports 1 gigabit/second and
2 gigabit/second speeds, although 4 gigabit/second and 10 gigabit/second
devices are being developed. Fibre Channel transmission is highly reliable,
with error rates roughly corresponding to one error per 1-2 terabytes of
data (these errors are recovered as part of the protocol).
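A quick sanity check on that figure, assuming the midpoint of the quoted range:

```python
def expected_errors(terabytes, tb_per_error=1.5):
    """Rough count of recoverable errors at ~1 error per 1-2 TB."""
    return terabytes / tb_per_error

# Replicating a 30 TB data set would see on the order of:
print(round(expected_errors(30)))  # ~20 recoverable errors
```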
Security is not generally considered an issue
with Fibre Channel, since the storage networks are by nature sealed off from
outside access. Shared storage is kept “secure” internally by implementing
zoning and LUN access controls. Nevertheless, as bridging protocols allow
access to Fibre Channel networks and as Fibre Channel devices are shared across
departments or separate companies, security will become a greater concern.
Fibre Channel networks typically use dedicated
fiber optic cables, which, although they can extend up to 10 km (or
farther, if using specialized electro-optical components), remain a limitation
for businesses dispersed over geographic expanses wider than metropolitan
areas. Several protocols that allow Fibre Channel traffic to be transported
over IP networks will soon be standardized by the IETF and should help
alleviate this problem.
Although Fibre Channel networks are essentially
unlimited in the number of possible ports, the high cost of switch ports
($500-$4000/port) and the need for HBAs ($500-$2000/adapter) raise network
costs considerably. Additionally, Fibre Channel networks require considerable
expertise to configure correctly. Moreover, interoperability is elusive, and
training is scarce and expensive.
iSCSI uses company IP
networks to transfer block-level data between computer systems and storage
devices. Unlike Fibre Channel, iSCSI uses existing network infrastructure:
network adapters, network cabling, hubs, switches, routers, and supporting
software. The use of network adapters, rather than HBAs, allows transfer of
both SCSI block commands and normal messaging traffic. (This gives iSCSI an
advantage over Fibre Channel network configurations, which require use of both
HBAs and network adapters to accommodate both types of traffic. While this is
not a problem for large servers, thin servers can accommodate only a limited
number of interconnects.)
iSCSI is based on the
serial SCSI standards, and can operate over existing Category 5 (or higher)
copper Ethernet cable or fiber optic wiring.
iSCSI describes the
transport protocol for carrying SCSI commands over TCP/IP. TCP handles flow
control and facilitates reliable transmission of data to the recipient by
providing guaranteed in-order delivery of the data stream. IP is responsible
for routing packets to the destination network. Together these protocols ensure
that data is correctly transferred from requesting applications (initiators) to
the storage devices (targets).
Transmissions across Category 5 network
cabling are at speeds up to 1 gigabit or 10 gigabits per second. Error
rates on gigabit Ethernet are in the same low range as Fibre Channel.
The amount of time it takes to queue, transfer,
and process data across the network is referred to as latency. One of the
drawbacks of transmitting SCSI commands across a TCP/IP network is that latency
is higher than it is on Fibre Channel networks, in part because of the overhead
associated with TCP/IP protocols. Additionally, many currently deployed
Ethernet switches were not designed to the low-latency specifications
associated with Fibre Channel. Thus, although Ethernet cabling is capable of high
speeds, the actual speed of transmission may be lower than expected,
particularly during periods of network congestion.
The second concern about iSCSI transmission is
data integrity, both in terms of errors and security. Error handling is
addressed at each protocol level. Security risks such as tampering or snooping
as the data passes over networks can be handled by implementing the IP security
protocol (IPsec). Both of these measures are detailed in the iSCSI Basics
section later in this document.
Advantages and Deployment Scenarios
Block-storage over IP provides businesses with new
flexibility in their storage solutions.
iSCSI enables SANs by connecting
storage to existing company network infrastructure. Small and midsize
organizations, previously priced out of the Fibre Channel solution, can now
afford SAN technologies. iSCSI or “SAN over IP” makes
the following SAN solutions widely available:
Shared Storage. Multiple computer systems can access shared storage resources,
enabling highly effective storage use, maximum scalability, and equipment
consolidation.
Highly Available Storage. With multiple connectors and the appropriate software
supporting multiple I/O paths between the SAN and servers, multiple failover
paths are supported if some aspect of the hardware fails.
Redundancy. Better data protection is achieved through disk mirroring to a
second storage box for fault tolerance, and replication to a remote device for
disaster recovery.
Network Attached Storage (NAS) with Backend SAN. NAS-SAN convergence is enabled
using a NAS box with an iSCSI device driver initiator that is attached to a
backend iSCSI SAN.
Using three business scenarios, one study
compared the total cost of ownership (TCO) to build Fibre Channel and iSCSI
storage networks from the ground up. In all cases, the TCO for iSCSI was lower
than for Fibre Channel. For companies that are considering expansion into
storage area networking and have a suitable pre-existing network infrastructure
(gigabit network adapters, cables, gigabit switches), the TCO is even lower for
iSCSI in comparison with Fibre Channel.
Moreover, iSCSI-based SANs capitalize on an existing staff
knowledge base of TCP/IP infrastructure and existing network management tools,
making it likely that they will place fewer demands on staff, especially as
compared to Fibre Channel networks.
iSCSI allows storage networks to
be created from existing network components, rather than having to add a second
network fabric type. This simplifies not only hardware configurations (since
Ethernet switches can be used), but also allows the use of existing security
methods, such as firewalls and IP security (which includes encryption,
authentication, and data integrity measures).
Sharing storage among multiple systems requires a method
for managing storage access so that systems access only the storage that is
assigned to them. In Fibre Channel networks this is done by assigning systems
to zones. In iSCSI, this can be done by using virtual LAN (VLAN) techniques. In
Fibre Channel, LUN masking must be used to provide finer granularity of storage
access; for iSCSI this is handled as part of the design by allowing targets to
be specific to individual hosts.
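As a rough sketch of the two access-control models just described (all host, target, and zone names are invented):

```python
# Fibre Channel style: access is granted by zone membership.
fc_zones = {
    "zone_a": {"hosts": {"web01"}, "targets": {"array1"}},
    "zone_b": {"hosts": {"db01"}, "targets": {"array1", "array2"}},
}

def fc_can_access(host, target):
    return any(host in z["hosts"] and target in z["targets"]
               for z in fc_zones.values())

# iSCSI style: each target is configured with the initiators allowed in.
iscsi_targets = {
    "iqn.2003-01.com.example:array2": {
        "allowed": {"iqn.2003-01.com.example:db01"},
    },
}

def iscsi_can_access(initiator, target):
    return initiator in iscsi_targets[target]["allowed"]

print(fc_can_access("web01", "array2"))  # False: web01 is not zoned for array2
print(iscsi_can_access("iqn.2003-01.com.example:db01",
                       "iqn.2003-01.com.example:array2"))  # True
```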
The use of IP traffic prioritization or Quality of Service
(QoS) can help ensure that storage traffic has the highest priority on the
network, which helps to alleviate latency issues.
iSCSI is not limited to the
metropolitan-area distances that constrain Fibre Channel. iSCSI
storage networks can be LANs, MANs, or WANs, essentially allowing global
distribution. iSCSI has the ability to eliminate the
conventional boundaries of storage networking, enabling businesses to access
data world-wide, and ensuring the most robust disaster protection possible. To
do this for Fibre Channel based SANs, it is necessary to introduce additional
protocol translations (such as Fibre Channel IP) and devices that provide this
capability on each end of the SAN links.
When multiple servers share access to the same storage, as
is done with Microsoft Cluster Service (MSCS) clusters, configuration of Fibre
Channel SANs can be very difficult—one improperly configured system impacts the
entire SAN. iSCSI clusters, unlike Fibre Channel
clusters, do not require complex configurations. Instead, iSCSI configuration
is easily accomplished as part of the iSCSI protocol, with little need for
intervention by system administrators. Changes introduced by hardware
replacement are largely transparent on iSCSI but are a major source of errors
on Fibre Channel implementations.
An iSCSI-based network consists of
1) the server and storage device end nodes, 2) either network interface cards
or HBAs with iSCSI over TCP/IP capability on the server, 3) storage devices
with iSCSI-enabled Ethernet connections, and in some cases, 4) iSCSI storage
switches and routers. Since most current SANs use Fibre Channel technology, multi-protocol switches (or
storage routers capable of translating iSCSI to Fibre Channel) must be used so
that iSCSI connected hosts can communicate with existing Fibre Channel devices.
Storage traffic is
commonly initiated by a host computer—the initiator—and received by the target
storage device. Since target devices can have multiple storage devices
associated with them (each one being a logical unit), the final destination of
the data is not the target per se, but specific logical units within the target.
The iSCSI protocol stack links SCSI commands for storage
and IP protocols for networking to provide an end-to-end protocol for
transporting commands and block-level data down through the host initiator
layers and up through the stack layers of the target storage devices. This
communication is fully bidirectional, as shown in Figure 1, where the arrows
indicate the communication path between the initiator and the target by means
of the network.
iSCSI Protocol Stack Layers
The initiator (usually a server) makes the application
requests. These are converted (by the SCSI class driver) to SCSI commands,
which are transported in command descriptor blocks (CDBs). At the iSCSI
protocol layer, the SCSI CDBs (under control of the iSCSI device driver) are
packaged in a protocol data unit (PDU) which now carries additional information,
including the logical unit number of the destination device. The PDU is passed
on to TCP/IP. TCP encapsulates the PDU and passes it to IP, which adds the
routing address of the final device destination. Finally, the network layer
(typically Ethernet) adds information and sends the packet across the physical
network to the target storage device.
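The layering just described can be sketched with toy headers. The field sizes below are deliberately simplified and are not the real RFC 3720 wire format (actual iSCSI PDUs carry a 48-byte basic header segment):

```python
import struct

def build_scsi_cdb(opcode, lba, length):
    """A toy 10-byte READ/WRITE-style command descriptor block."""
    return struct.pack(">B3xIH", opcode, lba, length)

def build_iscsi_pdu(cdb, lun):
    """Wrap the CDB in a toy iSCSI PDU carrying the destination LUN."""
    header = struct.pack(">BHB", 0x01, lun, len(cdb))
    return header + cdb

def build_tcp_ip_frame(pdu, dst_ip, dst_port):
    """Stand-in for TCP/IP encapsulation: prepend addressing info."""
    ip_bytes = bytes(int(octet) for octet in dst_ip.split("."))
    return struct.pack(">4sH", ip_bytes, dst_port) + pdu

cdb = build_scsi_cdb(opcode=0x28, lba=2048, length=8)   # READ(10)-like
pdu = build_iscsi_pdu(cdb, lun=3)
frame = build_tcp_ip_frame(pdu, "10.0.0.5", 3260)       # 3260 = iSCSI port
print(len(cdb), len(pdu), len(frame))                   # each layer adds bytes
```

Each step adds its own header, mirroring the SCSI → iSCSI → TCP/IP → Ethernet layering in the text.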
Additional PDUs are used for target responses and for the
actual data flow. Data for write requests is sent from the initiator to the target, and
is encapsulated by the initiator. Data for read requests is sent from the target to
the initiator, and the target does the encapsulation.
All initiator and target devices on the network must be
named with a unique identifier and assigned an address for access. iSCSI initiators and target nodes can either use an iSCSI
qualified name (IQN) or an extended unique identifier (EUI). Both types of
identifiers confer names that are permanent and globally unique.
Each node has an address consisting of the IP address, the
TCP port number, and either the IQN or EUI name. The IP address can be assigned
by using the same methods commonly employed on networks, such as Dynamic Host
Configuration Protocol (DHCP) or manual configuration.
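A minimal sketch of recognizing the two naming formats and assembling a node address (the regular expressions are looser than the real naming rules, and the names are invented):

```python
import re

IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:.+)?$")
EUI_RE = re.compile(r"^eui\.[0-9A-Fa-f]{16}$")

def name_type(node_name):
    """Classify an iSCSI node name as IQN, EUI, or invalid."""
    if IQN_RE.match(node_name):
        return "iqn"
    if EUI_RE.match(node_name):
        return "eui"
    return "invalid"

# A full node address couples the name with an IP address and TCP port
# (3260 is the registered iSCSI port).
address = ("192.168.1.20", 3260, "iqn.2003-01.com.example:storage.disk1")

print(name_type(address[2]))              # iqn
print(name_type("eui.02004567A425678D"))  # eui
print(name_type("disk1"))                 # invalid
```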
Storage area networks can become very large and complex.
While pooled storage resources are a desirable
configuration, initiators must be able to determine both the storage resources
available on the network, and whether or not access to that storage is
permitted. A number of discovery methods are possible, and to some degree the
method used depends on the size and complexity of the SAN configuration.
Administrator Control. For simple SAN
configurations, the administrator can manually specify the target node name, IP
address, and port to the initiator and target devices. If any changes occur on
the SAN, the administrator must update these names as well.
SendTargets. A second small storage
network solution is for the initiator to use the SendTargets operation to discover targets. The address of a target
portal is manually configured and the initiator establishes a discovery session
to perform the SendTargets command.
The target device responds by sending a complete list of additional targets
that are available to the initiator. This method is semi-automated, which means
that the administrator might still be required to enter a range of target
addresses.
SLP. A third method is to use the Service
Location Protocol (SLP). Early versions of this protocol did not scale well to
large networks. In the attempt to rectify this limitation, a number of “agents”
were developed to help discover targets, making discovery management
more cumbersome.
Internet Storage Name Service (iSNS) is a relatively new device discovery
protocol (ratified by the IETF) that provides both naming and resource
discovery services for storage devices on the IP network. iSNS
builds upon both IP and Fibre Channel technologies.
The protocol uses an iSNS server as the
central location for tracking information about targets and initiators. The
server can run on any host, target, or initiator on the network. iSNS client software is required in each host initiator or
storage target device to enable communication with the server. In the
initiator, the iSNS client registers the initiator and queries the list of
targets. In the target, the iSNS client registers the target with the server.
iSNS provides the following services:
Registration Service: allows initiators and targets to register and query
the iSNS server directory for information regarding initiator and target ID and
address.
Network Zoning and Logon Control Service:
iSNS initiators can be restricted to zones so that they are prevented from
discovering target devices outside their discovery domains. This prevents
initiators from accessing storage devices that are not intended for their use.
Logon control allows targets to determine which initiators can access them.
State Change Notification Service: This
service allows iSNS to notify clients of changes in the network, such as the
addition or removal of targets, or changes in zoning membership. Only
initiators that are registered to receive notifications will get these packets,
reducing random broadcast traffic on the network.
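The three services can be sketched with a toy in-memory registry. This illustrates the behavior described above (registration, discovery-domain zoning, and state change notification), not the actual iSNS wire protocol; all names are invented:

```python
class ToyISNSServer:
    def __init__(self):
        self.targets = {}      # target name -> discovery domain
        self.subscribers = {}  # initiator name -> (domain, callback)

    def register_initiator(self, name, domain, callback):
        self.subscribers[name] = (domain, callback)

    def register_target(self, name, domain):
        self.targets[name] = domain
        # State change notification: only initiators zoned into the
        # same discovery domain are told about the new target.
        for init_domain, callback in self.subscribers.values():
            if init_domain == domain:
                callback(f"target added: {name}")

    def query_targets(self, initiator_name):
        """Zoning: return only targets in the initiator's domain."""
        domain, _ = self.subscribers[initiator_name]
        return sorted(n for n, d in self.targets.items() if d == domain)

events = []
server = ToyISNSServer()
server.register_initiator("iqn.example:host1", "sales", events.append)
server.register_target("iqn.example:disk1", "sales")
server.register_target("iqn.example:disk2", "engineering")

print(server.query_targets("iqn.example:host1"))  # only the zoned target
print(events)  # host1 was notified about disk1 but not disk2
```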
From its inception, iSNS was designed to be scalable,
working effectively in both centralized and distributed environments. Since
iSNS also supports Fibre Channel IP, configurations that link Fibre Channel and
iSCSI can use iSNS to get information from Fibre Channel networks as well. Hence,
iSNS can act as a unifying protocol for discovery.
For the initiator to transmit information to the target,
the initiator must first establish a session with the target through an iSCSI logon
process. This process starts the TCP/IP connection, verifies that the initiator
has access to the target (authentication), and allows negotiation of various
parameters including the type of security protocol to be used, and the maximum
data packet size. If the logon is successful, an ID is assigned to both
initiator (an initiator session ID, or ISID) and target (a target session ID, or
TSID). Thereafter, the full feature phase—which allows for reading and writing
of data—can begin. Multiple TCP connections can be established between each
initiator/target pair, allowing unrelated transactions during one session.
Sessions between the initiator and its storage devices generally remain open,
but logging out is available as an option.
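The parameter negotiation step can be sketched as follows. Treating every key as a take-the-minimum numeric limit is a simplification (some real iSCSI login keys are declarative rather than negotiated), and the values are illustrative:

```python
def negotiate(initiator_params, target_params):
    """Toy negotiation: agree on the smaller of each offered limit."""
    return {key: min(initiator_params[key], target_params[key])
            for key in initiator_params}

initiator = {"MaxRecvDataSegmentLength": 262144, "MaxBurstLength": 1048576}
target    = {"MaxRecvDataSegmentLength": 65536,  "MaxBurstLength": 262144}

session = negotiate(initiator, target)
print(session)  # both sides operate within the smaller limits
```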
While iSCSI can be deployed over gigabit Ethernets, which
have low error rates, it is also designed to run over both standard IP networks
and WANs, which have higher error rates. WANs are
particularly error-prone since the possibility of errors increases with
distance and the number of devices the information must travel across. Errors
can occur at a number of levels, including the iSCSI session level (connection
to host lost), the TCP connection level (TCP connection lost), and the SCSI
level (loss or damage to PDU).
Error recovery is enabled through initiator and target
buffering of commands and responses. If the target does not acknowledge receipt
of the data because it was lost or corrupted, the buffered data can be resent
by the initiator, a target, or a switch.
iSCSI session recovery—necessary
if the connection to the target is lost due to network problems or protocol
errors—can be reestablished by the iSCSI initiator. The initiator attempts to
reconnect to the target, continuing until the connection is reestablished.
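A sketch of that retry behavior, with an assumed capped exponential backoff between attempts (`connect` stands in for a real transport call; the iSCSI specification does not mandate a particular retry policy):

```python
import time

def reconnect(connect, max_attempts=5, delay=0.01, max_delay=0.04):
    """Retry the connection until it succeeds or attempts run out."""
    for attempt in range(1, max_attempts + 1):
        try:
            return connect()
        except ConnectionError:
            if attempt == max_attempts:
                raise
            time.sleep(delay)
            delay = min(delay * 2, max_delay)  # back off, but cap it

# Simulate a target that recovers on the third attempt.
attempts = {"n": 0}
def flaky_connect():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("target unreachable")
    return "session re-established"

print(reconnect(flaky_connect))  # session re-established
print(attempts["n"])             # 3
```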
Since iSCSI operates in the Internet environment, security
is critically important. The IP protocol itself does not authenticate
legitimacy of the data source (sender), and it does not protect the transferred
data. iSCSI, therefore, requires strict measures to ensure security across IP
networks.
The iSCSI protocol specifies the use of IP security
(IPsec) to ensure that:
The communicating end points (initiator and
target) are authentic.
The transferred data has been secured through
encryption and is thus kept confidential.
Data integrity is maintained without
modification by a third party.
Data is not processed more than once, even if it
has been received multiple times.
(The Internet Key Exchange (IKE) protocol can assist with
key exchanges, a necessary part of the IPsec implementation.)
iSCSI also requires that the
Challenge Handshake Authentication Protocol (CHAP) be implemented to further
authenticate end node identities. Other optional authentication protocols
include Kerberos (such as the Windows implementation), which is a highly
secure network authentication protocol.
Even though the standard requires that these protocols be implemented, there is no such
requirement to use them in an
iSCSI network. Before implementing iSCSI, a network administrator should review
the security measures to make sure that they are appropriate for the intended
use and configuration of the iSCSI storage network.
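The CHAP exchange mentioned above can be sketched with the standard library. This follows the RFC 1994 MD5 scheme (identifier + secret + challenge); the shared secret here is invented, and a real implementation would also enforce identifier freshness:

```python
import hashlib
import os

def chap_response(identifier, secret, challenge):
    """CHAP response: MD5 over identifier, shared secret, challenge."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"shared-chap-secret"  # known to both ends, never sent on the wire

# Target side: issue an identifier and a random challenge.
identifier, challenge = 1, os.urandom(16)

# Initiator side: compute and send the response.
response = chap_response(identifier, secret, challenge)

# Target side: recompute and compare to authenticate the initiator.
print(response == chap_response(identifier, secret, challenge))        # True
print(response == chap_response(identifier, b"wrong-secret", challenge))  # False
```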
iSCSI generally offers higher performance
than NAS because NAS is a file transfer protocol and always
carries the overhead of a file system layer, whereas iSCSI transfers data blocks to
raw disks. Both NAS and iSCSI, however, can suffer some performance degradation compared
to Fibre Channel because of TCP/IP overhead. This
problem can be solved by offloading the TCP/IP processing to a TCP/IP offload
engine (TOE) on a chip or HBA. These solutions are currently supported by a
number of vendors.
The Microsoft iSCSI initiator
package is designed to run on Windows 2000, Windows XP, and Windows
Server 2003. The package consists of several software components:
iSCSI device driver. An optional component that is responsible for moving data
from the storage stack over to the standard network stack. This driver is
only used when iSCSI traffic goes over standard network adapters, not when
specialized iSCSI adapters are used.
iSCSI initiator service. A service that manages all iSCSI initiators (including network
adapters and HBAs) on behalf of the OS. Its functions include aggregating
discovery information and managing security. It includes an iSNS client, the code required for
iSNS device discovery.
Management applications. The iSCSI command line interface (iSCSICLI),
property pages for device management, and a control panel application.
iSCSI Initiator (Data Transfer)
Rather than simply
adding an iSCSI driver onto the existing operating system architecture, iSCSI
support was fully integrated with the OS. This integrated design facilitates
two types of driver/hardware implementations, either in the host software, or
in hardware. By providing support for multiple iSCSI implementations, Microsoft
provides businesses with maximum flexibility in choosing the solutions that fit
not only their pricing requirements, but also their network bandwidth needs and
CPU overhead constraints.
With this implementation, the complete iSCSI initiator
(the driver) is built into the operating system software and functions with the
network stack (iSCSI over TCP/IP). The Microsoft software initiator supports
both standard Ethernet network adapters and TCP/IP offloaded network adapters.
Microsoft support for standard Ethernet network adapters
enables businesses to take advantage of iSCSI technologies by using standard
off-the-shelf network adapter technology, without the need to purchase
specialized hardware. For businesses where high performance solutions are
critical, the Microsoft iSCSI initiator also supports the use of accelerated
network adapters to offload TCP overhead from the host processor to the network
adapter.
Alternatively, the iSCSI initiator or driver can be implemented in the hardware adapter card, rather than
in the host software. This can be done by using an iSCSI HBA or a multifunction
offload device. In either case, the iSCSI processing is offloaded to the
hardware adapter card instead of processing the iSCSI protocol in the host
software. With both TCP and iSCSI processing on the adapter card, high-speed
transport of block data with minimal CPU overhead is possible. (The drawback to this approach is that
these high-performance solutions are more costly.)
iSCSI HBA drivers are
designed by third-party vendors to function with the Windows operating system.
Because this solution is dedicated to block transport only, additional adapters
are required for network traffic messaging. While this is not a problem for
large chassis, it can be a problem for thin servers.
iSCSI HBAs must
support iSNS discovery (and they can support any other discovery mechanism as
well). The tightly integrated design of the initiator service with iSNS is
especially advantageous in HBA configurations (or with multifunction offload
devices) that do device discovery. Prior to the development of the iSCSI
initiator service, these adapters were able to discover targets but were unable—without
additional management application code—to report those targets to Windows. With
the Windows initiator service, all targets, irrespective of their method of
discovery, are consistently made known to the Windows OS. The service ensures
that multiple sessions to the same target, using different HBAs, are not
allowed, unless the session has a flag indicating that the proper multipathing
software is installed. The service also prevents unintended logging off of
initiators when a target is shared.
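The session-gatekeeping rule described above can be sketched in a few lines. This is a hypothetical illustration only; the class and method names are invented for the sketch and are not the actual initiator-service API:

```python
# Hypothetical sketch of the initiator service's session-admission rule:
# a second session to the same target over a different HBA is refused
# unless the login indicates that multipathing software is installed.

class SessionError(Exception):
    pass

class InitiatorService:
    def __init__(self):
        # target IQN -> set of HBAs with an active session to that target
        self.sessions = {}

    def login(self, target_iqn, hba, multipath=False):
        active = self.sessions.setdefault(target_iqn, set())
        if active and hba not in active and not multipath:
            raise SessionError(
                "second session to %s refused: no multipathing software "
                "indicated" % target_iqn)
        active.add(hba)

svc = InitiatorService()
svc.login("iqn.1991-05.com.example:target1", "hba0")
try:
    svc.login("iqn.1991-05.com.example:target1", "hba1")
except SessionError:
    print("second session refused")
# With the multipath flag set, the second session is allowed.
svc.login("iqn.1991-05.com.example:target1", "hba1", multipath=True)
```

The key design point is that admission is enforced centrally, in the service, so the rule holds no matter which HBA or driver originates the login.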
Security demands are somewhat less stringent
for iSCSI HBAs: they must support CHAP, but are not required to support IPsec
in the HBA network stack.
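The CHAP exchange itself is simple to illustrate. Per RFC 1994, which defines CHAP, the responder proves knowledge of the shared secret by returning an MD5 digest computed over the one-byte identifier, the secret, and the challenge. A minimal sketch:

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response per RFC 1994: MD5 over the one-byte identifier,
    the shared secret, and the challenge value."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Target issues a challenge; initiator answers; target verifies by
# computing the same digest with its own copy of the secret.
secret = b"shared-secret"      # configured on both initiator and target
challenge = os.urandom(16)
ident = 1

answer = chap_response(ident, secret, challenge)
expected = chap_response(ident, secret, challenge)
print(answer == expected)  # True when the secrets match
```

Because only the digest crosses the wire, the secret itself is never transmitted; confidentiality of the data stream, when required, comes from IPsec rather than CHAP.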
Multifunction Offload Devices
Some network adapter cards can support more than one function. In this case, these devices
support both network adapter and iSCSI HBA functions. Of all the options, this provides
the most flexibility because it accelerates all IP traffic, offloads TCP/IP
traffic, and may offload iSCSI traffic as well. Again, the drawback with this
approach is the higher cost associated with the more complicated hardware.
As with iSCSI HBAs, multifunction offload
devices must support iSNS for discovery and are required to use the initiator
service (by using the WMI interface), ensuring that Windows maintains a
centralized system of discovery.
Security requirements are the same as for iSCSI HBAs.
iSCSI Initiator Service (Management)
The iSCSI initiator service was designed to enable uniform
storage management regardless of whether the iSCSI driver is implemented in
hardware or software. The initiator service provides streamlined storage
management for all aspects of the iSCSI service, including:
Discovery. Allows aggregation of multiple
discovery mechanisms (iSNS, SLP, SendTargets, and manual configuration by an administrator).
Security. Provides iSNS server and client
support for security credentials.
Session initiation and termination.
Provides parameter settings for iSCSI sessions.
Device management. Provides HBA or
network adapter-based initiators with the necessary parameters.
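The discovery-aggregation role in the first bullet can be sketched as a merge keyed by target IQN. This is a hypothetical illustration; the mechanism names follow the list above, and everything else is invented:

```python
# Hypothetical sketch of discovery aggregation: targets reported by
# different mechanisms (iSNS, SLP, SendTargets, manual configuration)
# are merged into one list keyed by target IQN, so the OS sees each
# target exactly once regardless of how it was discovered.

def aggregate_targets(*sources):
    merged = {}
    for mechanism, targets in sources:
        for iqn, portal in targets:
            entry = merged.setdefault(iqn, {"portal": portal, "via": set()})
            entry["via"].add(mechanism)
    return merged

targets = aggregate_targets(
    ("iSNS",        [("iqn.2003-01.com.example:disk1", "10.0.0.5:3260")]),
    ("SendTargets", [("iqn.2003-01.com.example:disk1", "10.0.0.5:3260"),
                     ("iqn.2003-01.com.example:disk2", "10.0.0.6:3260")]),
    ("manual",      [("iqn.2003-01.com.example:disk3", "10.0.0.7:3260")]),
)
print(len(targets))  # three unique targets; disk1 was reported twice
```

Deduplicating by IQN is what lets the service present one consistent target list to Windows even when the same device is reported by several discovery mechanisms.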
Additionally, a number of mechanisms can be used to
communicate with the initiator service, including the iSCSI Control Panel
applet, the iSCSICLI, iSCSI Property Pages, and Windows Management
Instrumentation (WMI). The control panel applet allows the administrator to do
the most common iSCSI operations by using a graphical user interface (GUI);
full functionality is provided by the scriptable command line tool. Property
pages provide a mechanism for accessing common information and configuring
common implementations, such as configuring IP addresses for HBAs. The WMI
interfaces enable management applications to obtain information about the iSCSI
initiator and to control it. They also allow scripting of management tasks.
The initiator service enables the host computer system to
discover target storage devices on the storage area network and to determine
whether or not it has access to those devices. The iSNS
client—the code required to interact with the iSNS server—is built directly
into the initiator service, enabling the initiator service to maintain a list
of targets reported via the iSNS server as changes are made.
The Windows iSCSI initiator service supports all four
storage target discovery mechanisms listed in the iSCSI Basics section of this
paper, including iSNS support for discovery domains, state change notification,
and IPsec information.
Additionally, the initiator service interfaces with Plug
and Play, allowing dynamic discovery of available storage targets. To prevent
unauthorized initiators, an administrator must authorize the use of a target;
enforcement of such authorization is under initiator service control. The
service also allows persistent logons, thus ensuring that targets are logged on
every time the operating system boots.
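The persistent-logon behavior can be sketched as follows. The names are hypothetical; the real service records these entries in persistent storage and replays them during system startup:

```python
# Hypothetical sketch of persistent logons: targets marked persistent
# are recorded once and logged on again each time the system boots.

persistent_targets = []

def set_persistent_logon(target_iqn):
    """Record a target so it is logged on at every boot."""
    if target_iqn not in persistent_targets:
        persistent_targets.append(target_iqn)

def replay_persistent_logons(login):
    """At boot, log on to every recorded target via the `login` routine."""
    for iqn in persistent_targets:
        login(iqn)

set_persistent_logon("iqn.2003-01.com.example:disk1")
logged_on = []
replay_persistent_logons(logged_on.append)
print(logged_on)  # the recorded target is logged on again at boot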
iSCSI initiator security was
designed for integration with the Windows security infrastructure. This design
enables customers to capitalize on both the core functionality of IPsec and
Windows security features, which include advanced support for IKE features and
Active Directory distribution of security information to each machine.
In accordance with iSCSI standards, IPsec is used for
encryption and CHAP for authentication. Key exchange for encrypted
communication is provided with the Windows Internet Key Exchange (IKE) security
features. The initiator service has a common API that can be used for
configuring both the software initiator and the iSCSI HBAs.
iSCSI, or Internet SCSI, is an
IP-based storage networking standard developed by the IETF. By enabling
block-based storage over IP networks, iSCSI enables storage management over
greater distances than Fibre Channel technologies, and at lower cost. The
Microsoft iSCSI initiator service helps to bring the advantages of high-end SAN
solutions to small and midsized businesses. The service enables a full range of
solutions, from low-end, network adapter-only implementations to high-end
offloaded configurations that can rival Fibre Channel solutions.