Black Box Explains...Insertion loss.

Insertion loss is the power loss that results from inserting a component into a previously continuous path or creating a splice in it. It is measured by comparing the power received before and after the insertion.

In copper cable, insertion loss measures electrical power lost from the beginning of the run to the end.

In fiber cable, insertion loss (also called optical loss) measures the amount of light lost from beginning to end. Light can be lost in many ways: absorption, diffusion, scattering, dispersion, and more. Light can also be lost through poor connections and splices in which the fibers don’t align properly.

Light loss is measured in decibels (dB), which indicate relative power. A loss of 10 dB means a tenfold reduction in power.
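As a worked example, here is a minimal Python sketch (the power values are illustrative, not measurements) that converts the power received before and after an insertion into loss in dB:

```python
import math

def insertion_loss_db(power_before_mw: float, power_after_mw: float) -> float:
    """Insertion loss in dB from power measured before and after the insertion."""
    return 10 * math.log10(power_before_mw / power_after_mw)

# Illustrative values: 1.0 mW launched, 0.1 mW received.
print(f"{insertion_loss_db(1.0, 0.1):.1f} dB")  # 10.0 dB: a tenfold power reduction
print(f"{insertion_loss_db(1.0, 0.5):.1f} dB")  # 3.0 dB: roughly half the power
```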

Light strength can be measured with optical power meters, optical loss test sets, and other test sets that send a known light source through the fiber and measure its strength on the other end.


Black Box Explains...DIN rail usage.

DIN rail is an industry-standard metal rail, usually installed inside an electrical enclosure, which serves as a mount for small electrical devices specially designed for use with DIN rails. These devices snap right onto the rails, sometimes requiring a set screw, and are then wired together.

Many different devices are available for mounting on DIN rails: terminal blocks, interface converters, media converter switches, repeaters, surge protectors, PLCs, fuses, or power supplies, just to name a few.

DIN rails are a space-saving way to accommodate components. And because DIN rail devices are so easy to install, replace, maintain, and inspect, this is an exceptionally convenient system that has become very popular in recent years.

A standard DIN rail is 35 mm wide with raised-lip edges, its dimensions defined by the Deutsches Institut für Normung (DIN), the German standards body. Rails are generally available in aluminum or steel and may be cut to length for installation. Depending on the requirements of the mounted components, the rail may need to be grounded.


Black Box Explains…How to keep cabinets cool.

Networking equipment—especially servers—generates a lot of heat in a relatively small area. Today’s servers are smaller and have faster CPUs than ever. Because most of the power used by these devices is dissipated into the air as heat, they can really strain the cooling capacity of your data center. The components housed in a medium-sized data center can easily generate enough heat to warm a house in the dead of winter!

So cooling is a must: when network components run hot, they're prone to failure and a shortened lifespan.

Damage caused by heat is not always immediately evident as a catastrophic meltdown—signs of heat damage include node crashes and hardware failures that can happen over a period of weeks or even months, leading to chronic downtime.

Computer rooms generally have special equipment such as high-capacity air conditioning and raised-floor cooling systems to meet their high cooling requirements. However, it's also important to ensure that individual cabinets used for network equipment provide adequate ventilation. Even if your data center is cool, the inside of a cabinet may overheat if air distribution is inadequate. Just cranking up the air conditioning is not the solution.

The temperature inside a cabinet is affected by many variables, including door perforations, cabinet size, and the types of components housed within the cabinet.

The most direct way to cool network equipment is to ensure adequate airflow. The goal is to ensure that every server, every router, every switch has the necessary amount of air no matter how high or low it is in the cabinet.

It takes a certain volume of air to cool a device to within its ideal temperature range. Equipment manufacturers provide very little guidance about how to do this; however, there are some very basic methods you can use to maximize the ventilation within your cabinets.

Open it up.
Most major server manufacturers recommend that the front and back cabinet doors have at least 63% open area for airflow. You can achieve this by either removing cabinet doors altogether or by buying cabinets that have perforated doors.

Because most servers, as well as other network devices, are equipped with internal fans, open or perforated doors may be the only ventilation you need as long as your data center has enough air conditioning to dissipate the heat load.

You may also want to choose cabinets with side panels to keep the air within each cabinet from mixing with hot air from an adjacent cabinet.

Equipment placement.
Don't overload the cabinet by trying to fit in too many servers—75% to 80% of capacity is about right. Leave at least 1U of space between rows of servers for front-to-back ventilation. Maintain at least a 1.5" clearance between equipment and the front and back of the cabinet. And finally, ensure all unused rack space is closed off with blank panels to prevent recirculation of warm air.
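As a rough illustration of those rules of thumb, here is a minimal Python sketch (the 42U rack height and 80% ceiling are assumptions for the example) that checks a planned layout against the fill guideline:

```python
def within_fill_guideline(equipment_units: int, gap_units: int,
                          rack_units: int = 42, max_fill: float = 0.80) -> bool:
    """True if equipment plus ventilation gaps stay within the
    recommended 75-80% of cabinet capacity."""
    return equipment_units + gap_units <= rack_units * max_fill

# Example: twelve 2U servers with a 1U ventilation gap after every pair.
used = 12 * 2   # 24U of equipment
gaps = 6        # six 1U gaps
print(within_fill_guideline(used, gaps))  # True: 30U used, 33.6U allowed
```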

Fans and fan placement.
You can increase ventilation even more by installing fans to actively circulate air through cabinets. The most common cabinet fans are top-mounted fan panels that pull air from the bottom of the cabinet or through the doors. For spot cooling, use a fan or fan panel that mounts inside the cabinet.

For very tightly packed cabinets, choose an enclosure blower—a specialized high-speed fan that mounts in the bottom of the cabinet to pull a column of cool air from the floor across the front of your servers or other equipment. An enclosure blower requires a solid or partially vented front door with adequate space—usually at least 4 inches—between the front of your equipment and the cabinet door for air movement.

When using fans to cool a cabinet, keep in mind that cooling the outside of a component doesn't necessarily cool its inside. The idea is to be sure that the air circulates where your equipment's air intake is. Also, beware of installing fans within the cabinets that work against the small fans in your equipment and overwhelm them.

Temperature monitoring.
To ensure that your components are operating within their approved temperature range, it’s important to monitor conditions within your cabinets.

The most direct method to monitor cabinet temperature is to put a thermometer into your cabinet and check it regularly. This simple and inexpensive method can work well for small installations, but it does have its drawbacks—a cabinet thermometer can’t tell you the temperature inside individual components, it can’t raise an alarm if the temperature goes out of range, and it must be checked manually.

Another simple and inexpensive addition to a cabinet is a thermostat that automatically turns on a fan when the cabinet's temperature exceeds a predetermined limit.
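The logic behind such a thermostat is simple on/off control with hysteresis. The hypothetical Python sketch below illustrates it; read_temperature() and set_fan() stand in for whatever sensor and fan interface you actually have, and the trip points are assumptions:

```python
import time

FAN_ON_C = 35.0   # assumed upper trip point
FAN_OFF_C = 30.0  # lower release point; the gap prevents rapid cycling

def thermostat_loop(read_temperature, set_fan, poll_seconds: float = 30.0) -> None:
    """Turn the fan on above FAN_ON_C and off again only below FAN_OFF_C."""
    fan_running = False
    while True:
        temp_c = read_temperature()   # hypothetical sensor read
        if not fan_running and temp_c >= FAN_ON_C:
            set_fan(True)             # hypothetical fan control
            fan_running = True
        elif fan_running and temp_c <= FAN_OFF_C:
            set_fan(False)
            fan_running = False
        time.sleep(poll_seconds)
```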

Many network devices come with SNMP or IP-addressable internal temperature sensors to tell you what the internal temperature of the component is. This is the preferred temperature monitoring method because these sensors are inside your components where the temperature really counts. Plus you can monitor them from your desktop—they’ll send you an alert if there’s a problem.
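As a minimal sketch of polling such a sensor, the example below assumes the open-source pysnmp library; the device address, community string, and temperature OID are placeholders, because real sensor OIDs are vendor-specific and must be looked up in the device's MIB:

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Placeholder OID -- substitute the temperature object from your device's MIB.
TEMP_OID = '1.3.6.1.4.1.99999.1.1.0'

error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData('public', mpModel=1),       # SNMPv2c, placeholder community
           UdpTransportTarget(('192.0.2.10', 161)),  # placeholder device address
           ContextData(),
           ObjectType(ObjectIdentity(TEMP_OID))))

if error_indication or error_status:
    print('poll failed:', error_indication or error_status.prettyPrint())
else:
    for name, value in var_binds:
        print(f'{name.prettyPrint()} = {value.prettyPrint()} degrees C')
```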

There are also cabinet temperature sensors that can alert you over your network. These sensors are often built into another device, such as a PDU, and monitor only cabinet temperature, not the temperature inside individual devices. However, they can be a valuable addition to your cooling plan, especially for older devices that don't have internal sensors.

The future of cabinet cooling.
Very high-density data centers filled with blade servers present an extreme cooling challenge, causing some IT managers to resort to liquid-cooled cabinets. These systems are still fairly new, and the prospect of liquids near electronics makes IT managers nervous, but their high efficiency makes it likely that liquid cooling will become more prevalent.

It’s easy, really.
Keeping your data and server cabinets cool doesn't have to be complicated. Just remember not to overcrowd the cabinets, be sure to provide adequate ventilation, and always monitor conditions within your cabinets.


Black Box Explains: M1 connectors.

In 2001, the Video Electronics Standards Association (VESA) approved the M1 Display Interface System for digital displays. M1 is a versatile and convenient interface designed for computer displays, specifically digital projectors. M1 supports both analog and digital signals.

M1 is basically a modified DVI connector that can support DVI, VGA, USB and IEEE-1394 signals. The single connector replaces multiple connectors on projectors. An M1 cable can also be used to power accessories, such as interface cards for PDAs.

There are three primary types of M1 connectors:
–M1-DA (digital and analog). This is the most common connector, and it supports VGA, USB and DVI signals.
–M1-D (digital) supports DVI signals.
–M1-A (analog) supports VGA signals.

The M1 standard does not cover any signal specifications or detailed connector specifications.


Black Box Explains…Liquid cooling.

The trend toward high-density installations with higher-powered CPUs has made heat a critical issue in data centers. Blade servers present a special challenge—a rack of blade servers can dissipate more than 25 kW, generating more heat than an electric oven.

Heat-related problems
The heat generated in today’s high-density data centers can shorten equipment lifespan, negatively affect equipment performance, and cause downtime. Traditional air-cooling methods such as hot/cold aisle arrangements simply can’t keep up with these heat-generating installations. Data center managers often try to compensate for the inefficiency of air cooling by under-populating racks, but this wastes space—an often scarce commodity in modern data centers.

Why liquid
Because of the inherent inefficiencies of air cooling, many data centers have turned to liquid cooling using water or other refrigerants. Liquids have far greater heat-transfer properties than air—water can carry roughly 3400 times more heat per unit volume than air—and can cool far greater equipment densities.

Liquid cooling is usually done at the rack level using the airflow from the servers to move the heat to a cooling unit where it’s removed by liquid, neutralizing heat at the source before it enters the room. Liquid cooling may also be done at the component level, where cooling liquid is delivered directly to individual components. Liquid cooling may also arrive in the form of portable units for cooling hot spots.
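The physics behind these figures is simple heat-capacity arithmetic. The Python sketch below uses textbook property values for water and air (the 25-kW load and 10 °C coolant temperature rise are assumed for illustration) to check the volume-for-volume comparison and estimate the water flow needed to carry away a fully loaded rack's heat:

```python
# Approximate properties near room temperature.
WATER_DENSITY = 1000.0   # kg/m^3
WATER_CP = 4186.0        # J/(kg*K), specific heat of water
AIR_DENSITY = 1.2        # kg/m^3
AIR_CP = 1005.0          # J/(kg*K), specific heat of air

# Volume for volume, how much more heat water carries per degree than air.
ratio = (WATER_DENSITY * WATER_CP) / (AIR_DENSITY * AIR_CP)
print(f"water vs. air, per unit volume: about {ratio:.0f}x")   # ~3470x

# Water flow needed to remove 25 kW with a 10 K temperature rise:
# Q = m_dot * cp * dT  =>  m_dot = Q / (cp * dT)
heat_w = 25_000.0    # assumed rack load
delta_t = 10.0       # assumed coolant temperature rise
m_dot = heat_w / (WATER_CP * delta_t)   # kg/s of water
print(f"required flow: {m_dot:.2f} kg/s (~{m_dot * 60:.0f} L/min)")  # ~0.60 kg/s
```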

Liquid cooling options
Types of liquid cooling commonly used in data centers include:

  • Cabinet-door liquid cooling: With this method, cooling units are special cabinet doors that contain sealed tubes filled with chilled liquid. The liquid is circulated through the door to remove heat vented by equipment fans. Because liquid-cooled doors can replace standard cabinet doors, they’re the favored method for retrofitting liquid cooling into existing data centers.
  • Integrated liquid cooling: This consists of a specialized sealed cabinet that has channels for liquid cooling built into it to act as heat exchangers. Fans move hot air past the heat exchangers before sending the cooled air back to the servers. These cabinets are closed systems that release very little heat into the room.
  • Component-based liquid cooling: Some servers are preconfigured with integrated liquid-based cooling modules. After the servers are installed, liquid is circulated through the cooling modules.
  • Immersion cooling: This rather counterintuitive cooling method immerses servers in a non-conductive liquid, which is circulated to cool the servers.
  • Portable liquid cooling: These are small units that operate by blowing air across water-cooled coils. They can usually accept water from any source—including a nearby faucet. They’re generally plumbed with ordinary garden hoses and require no special skills to use. Portable cooling units are intended for emergency cooling rather than as a permanent solution.


Liquid cooling requires a shift in the way you think about cooling. Installation may require that you acquire a new skill set or hire a professional installer. However, the space savings and cost savings gained through liquid cooling more than make up for the inconvenience of installing a new cooling technology.

Not only does liquid cooling enable data centers to operate at far greater densities than conventional air cooling does, it gets rid of the infrastructure associated with air cooling, enabling you to eliminate hot/cold aisles and raised floors. Liquid cooling can support from 25% to 80% more equipment in the same footprint, resulting in significantly lower infrastructure costs.

Add to this the fact that cooling is often the majority of a data center’s operating cost, and it’s plain to see why an investment in the efficiency of liquid cooling goes right to the bottom line.


Black Box Explains... G.703.

G.703 is the ITU-T recommendation covering the 4-wire physical interface and digital signaling specification for transmission at 2.048 Mbps (E1). G.703 also includes specifications for U.S. 1.544-Mbps T1 but is still generally used to refer to the European 2.048-Mbps transmission interface.


Black Box Explains...Types of KVM switches.

Black Box has the keyboard/video switches you need to share one CPU between several workstations or to control several CPUs from one monitor and keyboard.

If you do a lot of switching, you need premium switches—our top-of-the-line ServSwitch™ KVM switches give you the most reliable connections for the amount of KVM equipment supported. With ServSwitch KVM switches, you can manage as many CPUs as you want from just one workstation, and you can access any server in any computer room from any workstation. Eliminating needless equipment not only saves you money, it also gives you more space and less clutter. Plus, you can switch between PCs, Sun®, and Mac® CPUs. ServSwitch KVM switches can also cut your electricity and cooling costs because by sharing monitors, you use less power and generate less heat.

If your switching demands are very minor, you may not need products as advanced as ServSwitch. Black Box offers switches to fill less demanding needs. Most of these are manual switches or basic electronic switches, which don’t have the sophisticated emulation technology used by the ServSwitch.

For PCs with PS/2® keyboards, try our Keyboard/Video Switches. They send keyboard signals, so your CPUs boot up as though they each have their own keyboard.

With the RS/6000™ KVM Switch, you can run up to six RS/6000 servers from one workstation. Our Keyboard/Video Switch for Mac enables you to control up to two Mac CPUs from one keyboard and monitor.

With BLACK BOX® KVM Switches, you can share a workstation with two or four CPUs. They’re available in IBM® PC and Sun Workstation® configurations.

You’ll also find that our long-life manual Keyboard/Video Switches are perfect for basic switching applications.


Black Box Explains...Upgrading from VGA to DVI video.

Many new PCs no longer have traditional Cathode Ray Tube (CRT) computer monitors with a VGA interface. The latest high-end computers have Digital Flat Panels (DFPs) with a Digital Visual Interface (DVI). Although most computers still have traditional monitors, the newer DFPs are coming on strong because flat-panel displays are not only slimmer and more attractive on the desktop, but they’re also capable of providing a much sharper, clearer image than a traditional CRT monitor.

The VGA interface was developed to support traditional CRT monitors. The DVI interface, on the other hand, is designed specifically for digital displays and supports the high resolution, the sharper image detail, and the brighter and truer colors achieved with DFPs.

Most flat-panel displays can be connected to a VGA interface, even though using this interface results in inferior video quality. VGA simply can’t support the image quality offered by a high-end digital monitor. Sadly, because a VGA connection is possible, many computer users connect their DFPs to VGA and never experience the stunning clarity their flat-panel monitors can provide.

It’s important to remember that for your new DFP display to work at its best, it must be connected to a DVI video interface. You should upgrade the video card in your PC when you buy your new video monitor. Your KVM switches should also support DVI if you plan to use them with DFPs.


Black Box Explains... Manual switch chassis styles.

There are five manual switch chassis styles: three for standalone switches (Styles A, B, and C) and two for rackmount switches (Styles D and E). Below are the specifications for each style.

Standalone Switches

Chassis Style A
Size — 2.5"H x 6"W x 6.3"D (6.4 x 15.2 x 16 cm
Weight — 1.5 lb. (0.7 kg)
Chassis Style B
Size — 3.5"H x 6"W x 6.3"D (8.9 x 15.2 x 16 cm)
Weight — 1.5 lb. (0.7 kg)
Chassis Style C
Size — 3.5"H x 17"W x 5.9"D (8.9 x 43.2 x 15 cm)
Weight — 8.4 lb. (3.8 kg)

Rackmount Switches

Chassis Style D (Mini Chassis)
Size — 3.5"H x 19"W x 5.9"D (8.9 x 48.3 x 15 cm)
Chassis Style E (Standard Chassis)
Size — 7"H x 19"W x 5.9"D (17.8 x 48.3 x 15 cm) collapse


Black Box Explains...T1 and E1.

If you manage a heavy-traffic data network and demand high bandwidth for high speeds, you need digital super-fast T1 or E1.

Both T1 and E1 are foundations of global communications. Developed more than 35 years ago and commercially available since 1983, T1 and E1 go virtually anywhere phone lines go, but they’re much faster. T1, used primarily in the U.S., sends data up to 1.544 Mbps; E1, used primarily in Europe, supports speeds to 2.048 Mbps. No matter where you need to connect—North, South, or Central America, Europe, or the Pacific Rim—T1 and E1 can get your data there fast!

T1 and E1 are versatile, too. Drive a private, point-to-point line; provide corporate access to the Internet; enable inbound access to your Web Server—even support a voice/data/fax/video WAN that extends halfway around the world! T1 and E1 are typically used for:
• Accessing public Frame Relay networks or Public Switched Telephone Networks (PSTNs) for voice or fax.
• Merging voice and data traffic. A single T1 or E1 line can support voice and data simultaneously.
• Making super-fast LAN connections. Today’s faster Ethernet speeds require the very high throughput provided by one or more T1 or E1 lines.
• Sending bandwidth-intensive data such as CAD/CAM, MRI, CAT-scan images, and other large files.

Scaling T1
Basic T1 service supplies a usable bandwidth of 1.536 Mbps (the 1.544-Mbps line rate minus 8 kbps of framing overhead). However, many of today’s applications demand much more bandwidth. Or perhaps you only need a portion of the 1.536 Mbps that T1 supplies. One of T1’s best features is that it can be scaled up or down to provide just the right amount of bandwidth for any application.

A T1 channel consists of twenty-four 64-kbps DS0 (Digital Signal 0) subchannels that combine to provide 1.536 Mbps of throughput. Because they enable you to combine T1 lines or to use only part of a T1, DS0s make T1 a very flexible standard.

If you don’t need 1.536 Mbps, your T1 service provider can rent you a portion of a T1 line, called Fractional T1. For instance, you can contract for half a T1 line—768 kbps—and get the use of DS0s 1–12. The service provider is then free to sell DS0s 13–24 to another customer.

If you require more than 1.536 Mbps, two or more T1 lines can be combined to provide very-high-speed throughput. The next step up from T1 is T1C; it offers two T1 lines multiplexed together for a total throughput of 3.152 Mbps on 48 DS0s. Or consider T2 and get 6.312 Mbps over 96 DS0s by multiplexing four T1 lines together to form one high-speed connection.

Moving up the scale of high-speed T1 services is T3. T3 is 28 T1 lines multiplexed together for a blazing throughput of 44.736 Mbps, consisting of 672 DS0s, each of which supports 64 kbps.

Finally there’s T4. It consists of 4032 64-kbps DS0 subchannels for a whopping 274.176 Mbps of bandwidth—that’s 168 times the size of a single T1 line!
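The DS0 arithmetic behind this hierarchy is easy to verify. This Python sketch tabulates each level's payload (its DS0 count times 64 kbps) next to its published line rate; the gap between the two is framing overhead:

```python
# T-carrier levels: (number of 64-kbps DS0 subchannels, line rate in Mbps).
LEVELS = {
    "T1":  (24,   1.544),
    "T1C": (48,   3.152),
    "T2":  (96,   6.312),
    "T3":  (672,  44.736),
    "T4":  (4032, 274.176),
}

for name, (ds0s, line_rate) in LEVELS.items():
    payload_mbps = ds0s * 0.064   # each DS0 carries 64 kbps
    print(f"{name:>3}: {ds0s:>4} DS0s -> {payload_mbps:8.3f} Mbps payload, "
          f"{line_rate:7.3f} Mbps line rate")

# Fractional T1 is the same arithmetic: rent only the DS0s you need.
print(f"half a T1 (DS0s 1-12): {12 * 64} kbps")   # 768 kbps
```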

These various levels of T1 service can be implemented simultaneously within a large enterprise network. Of course, this has the potential to become somewhat overwhelming from a management standpoint. But as long as you keep track of DS0s, you always know exactly how much bandwidth you have at your disposal.

T1’s cousin, E1, can also have multiple lines merged to provide greater throughput.
