Technology Guide: Products
Enterprise Servers

For large-scale, mission-critical applications such as high-volume OLTP, server consolidation, data warehousing, and decision support, enterprise servers power many of the world's most demanding, business-critical application environments. Most enterprise servers run their own version of Unix (or, in some cases, a proprietary operating system).

High-end vendors are focusing much attention on improving the scalability of their systems through new multiprocessor architectures that replace shared buses. The debut of IBM’s 64-bit z/Architecture and zSeries 900 system, which can operate independently or as part of a cluster of servers with hundreds of processors, moves in on territory previously reserved for supercomputers. A new breed of supercomputers based on standard microprocessors has emerged to handle processor-intensive commercial applications such as data warehousing, as well as the engineering, scientific and military applications that have been the mainstay of the supercomputer market.

System vendors must meet the uptime demands of such business-critical applications by providing computing platforms that are highly reliable, provide high availability, and are easy to diagnose and service. Common features include configurable redundancy, mainframe-style partitioning, improved system recovery, “no-outage” servicing, and sophisticated diagnostic support.

IDC reports that worldwide revenues in the high-end server market increased by 3.5% in 2000, reaching a total of US$11.4 billion. IBM led the market with 32.5%, while Sun, in the number two spot, almost doubled its high-end server revenues and market share to 19.2%. Compaq and HP (which have since merged) ranked third and fourth respectively.
Mid-range Servers

At the upper end of the mid-range server market, these servers are used to run enterprise applications such as ERP and CRM. Consequently, users demand a high degree of reliability from these systems, and platform vendors have delivered it. However, mid-range servers are more expensive to purchase than entry-level servers; users acquire greater capability, but at a higher cost. In 2000, IDC reported renewed growth in this sector, due in part to enterprise need for midrange servers to act as scalable database servers supporting hundreds or even thousands of end users simultaneously.

Mid-range servers include RISC-based servers (e.g. Compaq AlphaServer, IBM RS/6000 or p-series, SGI Origin, Sun), high-end Intel-based multiprocessing servers, and the popular IBM AS/400 (now called the eServer i-series 400). Many Intel-based servers have evolved beyond two-way and four-way symmetric multiprocessor boxes based on 32-bit microprocessors to 32-way servers capable of meeting higher-end, and in some cases mainframe, performance requirements. The launch of Intel’s 64-bit Itanium chip in 2001 has narrowed the feature gap between high-end Intel-based computing and RISC systems.

IBM’s eServer i-series 400 (previously AS/400) will continue to be a viable business platform because of a wide selection of packaged applications and a large installed base. However, as the performance and availability of applications for Unix, Linux and Windows 2000-based systems continue to grow, these platforms will make increasing inroads into the AS/400’s traditional markets. Many RISC-based and mid-range servers are acquiring high-end and mainframe features such as sophisticated workload management and partitioning for running multiple copies of the operating system, or in some cases multiple operating systems simultaneously in different partitions.

IDC reports that in 2000 the top four vendors, IBM, HP, Sun and Compaq, generated more than 80% of total midrange server revenues. IDC predicts that Intel-based servers running Windows 2000 will play a larger role in the midrange market, and forecasts that as these servers move from the entry-level to the midrange sector, price competition will increase for vendors of leading Unix/RISC mid-range servers, including Sun, HP, IBM and Compaq.
Entry-level Servers

These systems are based on the same architecture as PCs. Unlike desktop PCs, entry-level servers typically use higher-performance processors, such as Pentium 4 or Pentium III Xeon processors optimised for servers and available in multiprocessor configurations. Vendors typically build these systems around Intel’s commodity motherboards, such as the Intel SBT2 motherboard, featuring dual Pentium III processors and hot-swapping capabilities. These servers typically run Windows 2000, a shrink-wrapped version of Unix, or Linux.

Demand for these servers has increased with the advent of n-tier or three-tier infrastructure architectures, which have replaced many conventional client/server implementations. The entry-level server category has broadened and increased in sophistication as Intel has continued to expand its Pentium 4 architectures and the chip sets that enable four- and eight-way multiprocessing. Entry-level servers now include a wide array of offerings, ranging from single-function Internet appliances for Web caching to multiprocessor application servers in small form factors for high-density hosting facilities.
Enterprise Storage

An enterprise storage solution must significantly simplify or completely automate the mundane and repetitive tasks associated with storage management. At a minimum, the solution must support a broad range of server platforms and operating systems, including legacy servers that support back-office operations, thus freeing the storage manager from compatibility concerns. The solution must also provide scalable capacity, performance, and availability. Although every user may state that they require the highest level of availability or performance, not every user will be willing to pay for it. Thus, the solution must provide the flexibility to deliver a “class of service.”

The solution must also offer self-management capabilities, especially with respect to error recovery, data availability, and performance management. Error correction and the ability to recover from component failures are critical attributes of any enterprise storage solution. For example, RAID protection frees the storage manager from concerns over data loss resulting from a hard disk drive failure. Automated load balancing reduces the management requirements of storage administrators.

Data replication and data movement capabilities are of increasing significance. An enterprise storage solution facilitates the exchange of information between disparate applications through data replication, data sharing, or data movement. In addition, the enterprise storage solution must provide the ability to copy data between storage systems, both locally and remotely, and both synchronously and asynchronously, to facilitate cost-effective application recovery and disaster tolerance. Finally, the enterprise storage solution must provide the ability to deliver near-instantaneous, transient copies of data to reduce application downtime for tape backups, application testing, and other scheduled and unscheduled activities that impinge on the ultimate goal of continuous application, system, and data availability.
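The "near-instantaneous, transient copies of data" described above are commonly implemented as copy-on-write snapshots. The Python sketch below is purely illustrative and assumes a hypothetical Volume class rather than any vendor's API: taking a snapshot copies nothing at first, and old block contents are preserved only when they are later overwritten, which is why the copy appears instantly and backups can read from it while the application keeps writing.

```python
# A minimal copy-on-write snapshot sketch (illustrative only; Volume and its
# methods are hypothetical, not a real storage product's interface).

class Volume:
    def __init__(self, num_blocks):
        self.blocks = {i: b"" for i in range(num_blocks)}  # live data
        self.snapshots = []                                 # one dict per snapshot

    def snapshot(self):
        """Create a point-in-time view without copying any data yet."""
        self.snapshots.append({})        # block index -> preserved old contents
        return len(self.snapshots) - 1

    def write(self, index, data):
        """Copy-on-write: preserve the old block for open snapshots first."""
        for snap in self.snapshots:
            if index not in snap:        # only the first overwrite is preserved
                snap[index] = self.blocks[index]
        self.blocks[index] = data

    def read_snapshot(self, snap_id, index):
        """Read a block as it existed when the snapshot was taken."""
        return self.snapshots[snap_id].get(index, self.blocks[index])

# Usage: back up from the snapshot while the live volume keeps changing.
vol = Volume(4)
vol.write(0, b"original")
snap_id = vol.snapshot()                 # near-instantaneous: no blocks copied
vol.write(0, b"updated")                 # old contents preserved on demand
assert vol.read_snapshot(snap_id, 0) == b"original"
assert vol.blocks[0] == b"updated"
```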
Backup Storage

Tape drives make backup:

· Fast - Speed is critical because your data is constantly growing while the time available for backup is shrinking. Even the slowest tape drive writes 1 MB per second and the fastest 30 MB per second, which means a 200 GB backup can be completed in less than two hours (the arithmetic is sketched after this list).

· Easy - Unlike other storage methods, tape drives offer a range of media that allows you to back up all the data on a small to medium-sized server, up to 200 GB, on a single cartridge. And tape backup captures your system setup information as well as your data, allowing you to restore your entire system when disaster strikes. What's more, your software can schedule backups to happen automatically at the time most convenient for you.

· Reliable - When it comes to data protection, it's safety first. Tape has proved itself a reliable medium, and tape drives themselves have never been more reliable. Easily portable, tapes have the added advantage of being simple to remove and store offsite, so keeping a disaster recovery copy is less of a burden.

· Affordable - Per gigabyte of storage, tape is the most cost-effective way to store large amounts of data. The compact size of tape cartridges also helps keep down your storage costs.
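As a quick check of the throughput figures quoted in the "Fast" point above, the short Python sketch below works out how long a backup of a given size takes at a given sustained drive speed. It assumes uninterrupted streaming with no compression; real backup windows also depend on the backup software and the speed of the source disks.

```python
# Back-of-the-envelope backup-window arithmetic for the figures quoted above.

def backup_hours(data_gb: float, drive_mb_per_s: float) -> float:
    """Hours needed to stream data_gb gigabytes at drive_mb_per_s MB/s."""
    seconds = (data_gb * 1024) / drive_mb_per_s   # GB -> MB, then divide by rate
    return seconds / 3600

print(f"{backup_hours(200, 30):.1f} h at 30 MB/s")  # ~1.9 h, i.e. under two hours
print(f"{backup_hours(200, 1):.1f} h at 1 MB/s")    # ~56.9 h for the slowest drive
```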
Glossary
Redundant Array of Independent Disks (RAID)

RAID (redundant array of independent disks; originally redundant array of inexpensive disks) is a way of storing the same data in different places (thus, redundantly) on multiple hard disks. By placing data on multiple disks, I/O operations can overlap in a balanced way, improving performance. Because the data is stored redundantly, the array can tolerate the failure of an individual drive, which improves fault tolerance and availability.

A RAID appears to the operating system to be a single logical hard disk. RAID employs the technique of striping, which involves partitioning each drive's storage space into units ranging from a sector (512 bytes) up to several megabytes. The stripes of all the disks are interleaved and addressed in order.

In a single-user system where large records, such as medical or other scientific images, are stored, the stripes are typically set up to be small (perhaps 512 bytes) so that a single record spans all disks and can be accessed quickly by reading all disks at the same time. In a multi-user system, better performance requires establishing a stripe wide enough to hold the typical or maximum size record. This allows overlapped disk I/O across drives.

There are at least nine types of RAID, in addition to the non-redundant striped array known as RAID-0.
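To make the striping description above concrete, here is a minimal Python sketch of how a non-redundant, RAID-0 style array maps a logical byte offset onto a particular disk and stripe. The stripe size and disk count are arbitrary illustrative values, and the function is a simplified model rather than any particular controller's algorithm.

```python
# Minimal RAID-0 style striping sketch (no redundancy): stripes are
# interleaved round-robin across the drives, so sequential requests land
# on successive disks and can be serviced in parallel.

STRIPE_SIZE = 64 * 1024   # bytes per stripe unit (could be 512 B up to several MB)
NUM_DISKS = 4

def locate(logical_offset: int) -> tuple[int, int, int]:
    """Map a logical byte offset to (disk index, stripe index on that disk, offset in stripe)."""
    stripe_number = logical_offset // STRIPE_SIZE    # which stripe unit overall
    offset_in_stripe = logical_offset % STRIPE_SIZE
    disk = stripe_number % NUM_DISKS                 # round-robin interleaving
    stripe_on_disk = stripe_number // NUM_DISKS
    return disk, stripe_on_disk, offset_in_stripe

# Six consecutive 64 KB requests hit disks 0,1,2,3 and then wrap to 0,1.
for i in range(6):
    print(i, locate(i * STRIPE_SIZE))
# -> (0,0,0) (1,0,0) (2,0,0) (3,0,0) (0,1,0) (1,1,0)
```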
Storage Area Network (SAN)

A SAN, or storage area network, is a dedicated network that is separate from LANs and WANs. It generally serves to interconnect the storage-related resources that are connected to one or more servers. It is often characterised by its high interconnection data rates (gigabits per second) between member storage peripherals and by its highly scalable architecture. Though typically spoken of in terms of hardware, SANs very often include specialised software for their management, monitoring and configuration.

SANs can provide many benefits. Centralising data storage operations and their management is certainly one of the chief reasons that SANs are being specified and deployed today. Administering all of the storage resources in high-growth and mission-critical environments can be daunting and very expensive. SANs can dramatically reduce the management costs and complexity of these environments while providing significant technical advantages.
Updated on Sept 3, 2002
© Copyright 2002 Allan Low. All rights reserved. Reproduction of this Web Site, in whole or in part, in any form or medium without express written permission from the author is prohibited.