Dual-Power Feeds in Data Centers

Always-on technology, streaming content and cloud adoption are driving demand for efficient, resilient, fast data centers that never let us down.

To meet these needs, dual-power feeds – two independent electrical feeds coming into a data center from the utility company – are becoming more common to reduce the chance of a complete outage (or of insufficient power). This type of power set-up is often seen in Tier 4 data centers. If one of the two power sources suffers an interruption, the other source continues to supply power.

Generally labeled “A” and “B” feeds, each power source has not only its own utility feed, but also:

  • A backup generator
  • An automatic transfer switch (to switch between utility and generator power)
  • Electrical and distribution switchboards
  • An uninterruptible power supply (UPS)
  • A power distribution unit (PDU)
  • Rack-level PDUs

At any one of these points along the chain, failure can occur. A true dual-power feed means that there are two separate sets of these components operating independently, reducing the likelihood of downtime due to failure.

Today, most mission-critical IT equipment, such as servers and switches, is also designed with at least dual power supplies. When everything is running normally, the equipment pulls power equally from both feeds. In the event of an outage, however, the IT equipment automatically shifts its entire load to the remaining feed.
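To make the A/B behaviour concrete, here is a minimal sketch – illustrative only, not any vendor's firmware logic – of how a dual-corded device might share load across both feeds and shift everything to the surviving feed when one drops.

```python
# Minimal sketch of dual-feed load sharing and failover for a dual-PSU device.
# Feed names and wattages are illustrative, not taken from any specific product.

def distribute_load(load_w, feed_a_ok=True, feed_b_ok=True):
    """Return the draw (in watts) placed on the A and B feeds."""
    if feed_a_ok and feed_b_ok:
        # Normal operation: pull power equally from both feeds.
        return {"A": load_w / 2, "B": load_w / 2}
    if feed_a_ok:
        # Feed B lost: shift the entire load to feed A.
        return {"A": load_w, "B": 0.0}
    if feed_b_ok:
        # Feed A lost: shift the entire load to feed B.
        return {"A": 0.0, "B": load_w}
    raise RuntimeError("Both feeds down - the equipment loses power")

print(distribute_load(400))                   # {'A': 200.0, 'B': 200.0}
print(distribute_load(400, feed_b_ok=False))  # {'A': 400.0, 'B': 0.0}
```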

Read full article

Network Upgrades: Utilizing Parallel Fiber Cabling

It comes as no surprise that enterprise and consumer demands are impacting data centers and networks. As speed requirements go up, layer 0 (the physical media for data transmission) becomes increasingly critical to ensuring link quality.

Numerous organizations are looking for an economical, futureproof migration path toward 100G (and beyond). Multimode fiber (MMF) cabling systems continue to be the most popular cabling and connectivity solution for this migration.

Both duplex and parallel cabling are options for network upgrades. A few weeks ago, we discussed duplex MMF cabling. In this post, we’ll discuss parallel MMF cabling.

 

Parallel Fiber Cabling

When transceiver technology can’t keep up with Ethernet speed requirements, the most obvious solution is to move from duplex to parallel fiber cabling.

Although using BiDi (bi-directional) and SWDM (shortwave wavelength division multiplexing) transceivers can reduce direct point-to-point cabling costs, they do not support breakout configurations (e.g. a 40G switch port broken out to four 10G server ports), which are very common in data centers.

According to research firm LightCounting, approximately 50% of 40GBASE-SR4 QSFP+ form factors are deployed for breakout configuration; the other 50% are deployed for direct switch-to-switch links.

As a matter of fact, 40G QSFP+ and 100G QSFP28 are the most popular form factors used for Ethernet switches in data centers. QSFP (quad small form-factor pluggable) is a bi-directional, hot-pluggable module designed mainly for datacom applications. QSFP+/QSFP28 offers 2.5x the data density of SFP+/SFP28, using four parallel electrical lanes. The optical interface is a receptacle for female MPO connectors. Four fibers (1, 2, 3 and 4) transmit the signal; the other four fibers (9, 10, 11 and 12) receive the optical signal.

QSFP transceivers, paired with parallel fiber connectivity with a one-row MPO-12 (Base-8 or Base-12) interface, can support flexible breakout or direct connection.

  • 40G/100G direct links are typically used in switch-to-switch links, which can be supported by duplex or parallel fiber cabling.
  • 40G/100G Ethernet ports can be configured as 4x 10G or 4x 25G ports to support 10G/25G server uplinks.
  • 40G/100GBASE-SR4 transceivers only use eight fibers in an MPO-12 connector; therefore, Base-8 is a cost-optimized cabling solution that allows 100% fiber utilization (see the lane-mapping sketch below).
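To illustrate how a parallel SR4 link occupies an MPO-12 interface, here is a minimal sketch based on the fiber positions described above (fibers 1–4 transmit, 9–12 receive, 5–8 unused). The lane-to-fiber pairing shown is an assumption for illustration only; actual polarity depends on the cabling method deployed.

```python
# Sketch of an SR4-style parallel link over an MPO-12 interface, using the
# fiber positions described above: 1-4 transmit, 9-12 receive, 5-8 unused.
# The exact lane-to-fiber pairing is illustrative; verify against the polarity
# method (and Base-8/Base-12 cassettes) actually deployed.

TX_FIBERS = [1, 2, 3, 4]
RX_FIBERS = [12, 11, 10, 9]   # assumed reverse-ordered pairing for illustration
UNUSED    = [5, 6, 7, 8]      # why Base-8 trunks achieve 100% fiber utilization

def breakout_lanes():
    """Pair each of the four lanes (e.g. 4x 10G or 4x 25G) with its fibers."""
    return [{"lane": i + 1, "tx_fiber": tx, "rx_fiber": rx}
            for i, (tx, rx) in enumerate(zip(TX_FIBERS, RX_FIBERS))]

for lane in breakout_lanes():
    print(lane)   # e.g. {'lane': 1, 'tx_fiber': 1, 'rx_fiber': 12}
```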

Read full article

Analyzing Data Center Energy Consumption By Using Business Metrics

About five years ago, the industry first heard about Digital Service Efficiency (DSE) – a method that was designed by eBay to help the company capture a holistic picture of their data center energy consumption and performance.

The initiative was then made public in an effort to help other organizations establish their own data center energy consumption benchmarks and goals, and compare live system performance against those benchmarks to determine actual efficiency levels.

While eBay was tracking its data centers’ power usage effectiveness (PUE), which illustrates how efficient a data center’s electrical and mechanical systems are, the team felt something was missing. Calculating PUE didn’t offer insight into how efficiently the data center equipment (such as servers) was being used. The DSE initiative was formed to fill this gap.

Earlier this year, the team of eBay engineers who created the DSE initiative received a patent for it. With this news, we thought it would be a good time to revisit the data center productivity metric they introduced a few years ago. Even though it was created based on eBay’s core competency – e-commerce – there are still some lessons to be learned.

In eBay’s case, to measure performance and data center energy consumption, they chose to specifically measure how many online business transactions are completed per kilowatt-hour consumed. They calculated this by analyzing four metrics:

  1. The type of performance they wanted to measure (transactions, or the number of online purchases and sales)
  2. Cost per transaction (they measured cost per megawatt-hour, per user and per server)
  3. Environmental impact (amount of carbon dioxide produced per transaction)
  4. Revenue per transaction (they measured revenue per transaction, per megawatt-hour and per user)

They then based their data center improvement goals on those metrics – goals like reducing cost per transaction by a certain percentage, or increasing transactions per kilowatt-hour by a certain percentage.
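The arithmetic behind a DSE-style roll-up is straightforward. The sketch below uses made-up numbers – not eBay’s actual figures – to show how transactions, energy, cost, carbon and revenue combine into per-transaction and per-kilowatt-hour metrics.

```python
# Illustrative DSE-style roll-up. All numbers are made up, not eBay's data.

transactions = 1_200_000   # online purchases and sales completed in the period
energy_kwh   = 50_000      # data center energy consumed in the same period
energy_cost  = 9_000.0     # dollars spent on data center energy in the period
co2_kg       = 20_000.0    # carbon dioxide produced
revenue      = 150_000.0   # revenue attributed to those transactions

metrics = {
    "transactions_per_kwh":    transactions / energy_kwh,
    "cost_per_transaction":    energy_cost / transactions,
    "co2_kg_per_transaction":  co2_kg / transactions,
    "revenue_per_transaction": revenue / transactions,
}

for name, value in metrics.items():
    print(f"{name}: {value:.4f}")
```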

The organization believes that, by substituting your own business metric for the one it used – online business transactions – you can create your own way of measuring data center productivity and efficiency, too.

What performance metric could you use to measure and benchmark data center energy consumption? Here are a few ideas:

  • Healthcare: number of patients seen or number of appointments set
  • Hospitality: number of guests who stay onsite or number of reservations
  • Manufacturing: number of widgets produced
  • Financial: number of transactions

Read full article

Supporting Your Future of Network Technology: 6 Ways to Design Layer 0

The year 2014 was a key moment for the structured cabling industry: that is when the number of devices on the Internet officially surpassed the number of people on the Internet. In other words, we’re carrying and using more connected devices than ever before. Since then, the Internet of Things (IoT) has begun to take over conversations about technology. Digital buildings – which feature a connected infrastructure to bring building systems together via the enterprise network – are moving to the forefront.

With these changes, how can you design your cabling infrastructure – your layer 0 – to support network technology changes? Every structured cabling system is unique, designed to fit a company’s specific needs. Taking the future into account during cabling projects helps maximize your investment while decreasing long-term costs. With correct planning and design, you’ll be ready for future hardware and software upgrades, able to support growing numbers of devices joining your network, and set to accommodate higher-speed Ethernet migrations such as 40G/100G.

We have gathered our best pieces of advice on how to design your layer 0 to support the future of network technology.

 

1. Abide by Cabling Standards

Structured cabling standards provide guidance and best practices for the lifetime of your layer 0. Following them allows you to mix products from different vendors and also simplifies future moves, adds and changes:

  • TIA, the North American standards for things like telecommunications cabling (copper and fiber), bonding and grounding, and intelligent building cabling systems
  • ISO/IEC, global standards harmonized with TIA networking standards
  • IEEE, which creates Ethernet-based standards for networks and relies on TIA and ISO/IEC layer 0 standards

2. Invest in High-Performance Cables

When your cabling system is designed to be used across multiple generations of hardware, it can remain in place longer while supporting fast and easy hardware upgrades.

Analyze how your business is currently run, as well as any expected business or technology shifts in the years to come. Then match these requirements with the performance characteristics of the cabling systems you’re considering.

Make sure that the category cabling can:

  • Support the full 100m distance per channel
  • Accommodate a tight bend radius inside wall cavities and other tight spaces
  • Support the highest operating temperature rating possible with low DC resistance
  • Maintain excellent transmission performance
  • Be bundled or tightly packed into trays and pathways without performance issues

Most Category 6A cables offer all of the benefits mentioned above, making Category 6A a solid decision that will support the future of network technology.
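One practical way to apply this checklist is to compare candidate cables’ datasheet values against your requirements before purchasing. The sketch below uses invented example figures rather than any vendor’s datasheet.

```python
# Sketch of screening candidate cables against the checklist above.
# All figures are invented examples, not taken from any vendor datasheet.

requirements = {"channel_length_m": 100, "operating_temp_c": 60}

candidates = {
    "Example Cat 6A cable": {"channel_length_m": 100, "operating_temp_c": 75},
    "Example Cat 6 cable":  {"channel_length_m": 100, "operating_temp_c": 60},
    "Example budget cable": {"channel_length_m": 90,  "operating_temp_c": 60},
}

for name, specs in candidates.items():
    meets = all(specs[key] >= needed for key, needed in requirements.items())
    print(f"{name}: {'meets' if meets else 'fails'} the checklist")
```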

3. Find a Reputable Warranty

One of the best ways to ensure that your cabling and connectivity solutions will last is to find products that are backed by extensive and impressive warranties (such as a 25-year warranty).

When layer 0 is properly designed and installed, the structured cabling system will support your short-term and long-term needs. A reliable warranty ensures that this happens. For example, with a 25-year warranty, the installed system should meet or exceed industry standards for 25 years, as well as support future standards and protocols. If this isn’t the case, the manufacturer should address the issue.

Read full article

The Evolution of Wireless Standards

In the late 1990s, one of the first wireless standards was introduced. You may remember IEEE 802.11b – the first wireless LAN standard to be widely adopted and incorporated into computers and laptops. A few years later came IEEE 802.11g, which offered signal transmission over relatively short distances at speeds of up to 54 Mbps. Both standards operated in the unlicensed 2.4 GHz frequency range. In 2009, IEEE 802.11n (which operated in the 2.4 GHz and 5 GHz frequency ranges) was a big step up. It provided anytime wireless access and became the de facto standard for mobile users.

Understanding wireless technology and standards like these is key to making sure you are investing in technology and equipment that can support your organisation’s short-term and long-term network-connection requirements. Wireless standards lay out specifications that hardware and software designed to those standards must follow.

Now that we have covered the major wireless standards of the past, let’s look ahead at current standards – and what is yet to come.

 

General-Purpose Applications

Today’s wireless standards, like IEEE 802.11ac (Wave 1 and Wave 2), operate in the 5 GHz frequency range. This standard is used for many general-purpose, short-range, multi-user applications, like connecting end devices to networks.

As we have mentioned in previous blogs, IEEE 802.11ax is the “next big thing” in wireless standards. As the successor to 802.11ac, 802.11ax operates in both the 2.4 GHz and 5 GHz frequency spectrums. It will offer 10G speeds and allow many users to share one network simultaneously with fewer connectivity problems, while still maintaining fast connection speeds. It will improve average throughput per user by a factor of at least four compared to 802.11ac Wave 1.

 

High-Performance Applications

Operating at an unlicensed frequency of 60 GHz are IEEE 802.11ad and IEEE 802.11ay, which are used primarily for short-range, point-to-point applications rather than point-to-multipoint applications. 802.11ay is an update to 802.11ad that improves both throughput and range, offering speeds between 20 Gbps and 40 Gbps.

 

IoT Applications

Operating at lower frequencies are standards like 802.11af (UHF/VHF) and 802.11ah (915 MHz). These standards are designed for extended-range applications, like connecting hundreds of remote Internet of Things (IoT) sensors and devices. They’re also used in rural areas.

Because they operate in lower-frequency ranges, they’re able to offer extended operational ranges. They can carry signals for miles, but throughput is low – roughly 350 Mbps at most.

Read full article

Public vs Private Clouds: How Do You Choose?

An Intel Security survey of 2,000+ IT professionals last year revealed some fascinating findings about public and private cloud adoption. For starters, within the next 15 months, 80% of all IT budgets will include spending dedicated to cloud solutions.

Many enterprises are starting to rely on public and private clouds for a few simple reasons:

  • Most good public and private cloud providers regularly and automatically back up data they store so it is recoverable if an incident occurs.
  • Tasks like software upgrades and server equipment maintenance become the responsibility of the cloud provider.
  • Scalability is virtually unlimited; you can grow rapidly to meet business needs, and then scale back just as quickly if that need no longer exists.
  • Upfront costs are lower, since cloud computing eliminates the capital expenses associated with investing in your own space, hardware and software.

But before you decide to move to the cloud, you should know the differences between public and private clouds. Choosing between them often depends on the type of data you’re creating, storing and working with.

 

Public Clouds Defined

The public cloud got its kick start by hosting applications online – today, however, it has evolved to include infrastructure, data storage and more. Most people do not realise that they have been benefitting from the public cloud for years (before most of us even referred to “public and private clouds”). For example, any time you access your online banking tool or log in to your Gmail account, you’re using the public cloud.

In a public cloud, data center infrastructure and physical resources are shared by many different enterprises, but owned and operated by a third-party services provider (the cloud provider). Your company’s data is hosted on the same hardware as the data from other companies. The services and infrastructure are accessible online. This allows you to quickly scale resources up and down to meet demand. As opposed to a private cloud, public cloud infrastructure costs are based on usage. When dealing with the public cloud, the user/customer typically has no control (and very limited visibility) regarding where and how services are hosted.

 

Private Clouds Defined

In a private cloud, infrastructure is either hosted at your own onsite data center or in an environment that can guarantee 100% privacy (through a multi-tenant data center or a private cloud provider). In these third-party environments, the components of a private cloud (computing, storage and networking hardware, for example) are all dedicated solely to your organization so you can customize them for what you need. In some cases, you’ll even have choices about what type of hardware is used. No other organization’s data will be hosted on the equipment you use.

With an internal private cloud (one hosted at your own data center), your enterprise incurs the capital and operating costs associated with establishing and maintaining it. Many of the benefits listed earlier about choosing cloud services don’t apply to internal private clouds, especially since you serve as your own private cloud provider.

In organizations and industries that require strict security and data privacy, private clouds usually fit the bill because applications can be hosted in an environment where resources aren’t shared with others; this allows higher levels of data security and control as compared to the public cloud.

 

What’s a Hybrid Cloud?

Enterprises also have the opportunity to take advantage of both the public and private cloud by implementing a hybrid cloud, which combines the two.

For example, the public cloud can be used for things like web-based email and calendaring, while the private cloud can be used for sensitive data.

Read full article

The Impact of Patch Cord Types on the Network

Data centers and the networks they support have grown to become an integral part of every business. The software applications that keep mission-critical operations up and running in highly redundant, 24/7 environments rely on highly engineered structured cabling systems to connect the cloud to every user. Structured cabling is the foundation that supports data centers.

Although structured cabling is not as sexy as diesel-driven UPS systems or adiabatic cooling systems, it plays a huge role in supporting the cloud. One important component of structured cabling is often overlooked: patch cords.

Oftentimes, patch cords are purchased haphazardly and installed at the last minute. But the right patch cord type can improve the performance of your network. The proper design, specification, manufacturing, installation and ongoing maintenance of patch cord systems can help ensure that your network experiences as much uptime as possible.

A patch cord problem can wreak havoc on an enterprise, from preventing an airline customer from making a necessary reservation change to keeping a hotel guest from getting work done while on business travel.

What Drives Data Growth?

Explosive data growth due to social media, video streaming, IoT, big data analytics and changes in the data center environment (virtualization, consolidation and high-performance computing) means one thing: data traffic is growing not only in volume, but also in speed.

Another essential point is network design. Today’s network designs, such as leaf-spine fabrics, make the network flatter, which lowers latency – and that makes the Ethernet links and their corresponding patch cord types incredibly important.

The Definition of a Patch Cord

A patch cord is a cable with a connector on both ends (the type of connector is a function of use). A fiber patch cord is sometimes referred to as a “jumper.”

Patch cords are part of the local area network (LAN), and are used to connect network switches to servers, storage and monitoring portals (traffic access points). They are considered to be an integral part of the structured cabling system.

Copper patch cords are made with either solid or stranded copper conductors; due to potential signal loss, their lengths are typically kept shorter than horizontal cable runs.

A fiber patch cord is a fiber optic cable terminated at both ends with connectors. These connectors allow the cord to be rapidly connected to an optical switch or other telecommunications/computer device. Fiber patch cords are also used to connect the optical transmitter, receiver and terminal box.

Read full article

Network Cables: How Cable Temperature Impacts Cable Reach

There is nothing more disheartening than making a big investment in something that promises to deliver what you require – only to find out once it is too late that it is not performing according to expectations. What happened? Is the product not adequate? Or is it not being utilised correctly?

Cable Performance Expectations

This scenario holds true for category cable investments as well. A cable that cannot fulfil its 100 m channel reach (even though it is marketed as a 100 m cable) can derail network projects, increase costs, cause unplanned downtime and call for lots of troubleshooting (especially if the problem is not obvious right away).

High cable temperatures are sometimes to blame for cables that don’t perform up to the promised 100 m. Cables are rated to transmit data over a certain distance up to a certain temperature. When the cable heats up beyond that point, resistance and insertion loss increase; as a result, the channel reach of the cable often needs to be de-rated so the link can still transmit data reliably.

Many factors cause cable temperatures to rise:

  • Cables installed above operational network equipment
  • Power being transmitted through bundled cabling
  • Uncontrolled ambient temperatures
  • Using the wrong category cabling for the job
  • Routing of cables near sources of heat

In Power over Ethernet (PoE) cables – which are becoming increasingly popular to support digital buildings and IoT – as power levels increase, so does the current running through the cable, and the amount of heat generated within the cable increases as well. Bundling makes temperatures rise even more, because the heat generated by the current passing through the inner cables can’t escape. As temperatures rise, so does cable insertion loss.

Testing the Impacts of Cable Temperature on Reach

To test this theory, I created a model of the temperature characteristics of different cables. Each cable was placed in an environmental chamber, and changes in insertion loss were recorded as the cable temperature changed.

The information gathered from these tests was combined with connector and patch cord insertion loss levels in the model below to determine the maximum length that a typical channel could reach while maintaining compliance with channel insertion loss.

This model represents a full 100 m channel with 10 m of patch cords and an initial permanent link length of 90 m. I assumed that the connectors and patch cords were in a controlled environment (at room temperature, so their insertion loss stays constant). Permanent links were assumed to be at a higher temperature of 60 degrees C (the same assumption used in ANSI/TIA TSB-184-A, where the ambient temperature is 45 degrees C and the temperature rise due to PoE current and cable bundling is 15 degrees C).

Using the data from these tests, I was able to reach the full 100 m length with Belden’s 10GXS, a Category 6A cable. I then modeled Category 6 and Category 5e cables from Belden at that temperature, and wasn’t able to reach the full 100 m. Why? Because the insertion loss of the cable at this temperature exceeded the insertion loss performance requirement.
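A simplified version of this de-rating calculation can be sketched as follows. The insertion loss figures and temperature coefficient are illustrative placeholders rather than Belden test data, but the structure mirrors the model above: 10 m of patch cords held at room temperature, a permanent link at an elevated temperature, and a fixed channel insertion loss budget.

```python
# Simplified de-rating model: find the longest permanent link that keeps total
# channel insertion loss within budget when the link runs hotter than 20 C.
# All numeric values are illustrative placeholders, not Belden test data.

CHANNEL_IL_BUDGET_DB = 30.0   # allowed channel insertion loss at the test frequency
PATCH_CORD_IL_DB     = 3.5    # 10 m of cords plus connectors, at room temperature
IL_PER_M_AT_20C_DB   = 0.26   # cable insertion loss per metre at 20 C
TEMP_COEFF_PER_C     = 0.004  # fractional IL increase per degree C above 20 C

def max_permanent_link_m(link_temp_c):
    il_per_m = IL_PER_M_AT_20C_DB * (1 + TEMP_COEFF_PER_C * (link_temp_c - 20))
    available_db = CHANNEL_IL_BUDGET_DB - PATCH_CORD_IL_DB
    return min(90.0, available_db / il_per_m)   # capped at the 90 m design length

for temp_c in (20, 45, 60):
    print(f"{temp_c} C -> max permanent link ~ {max_permanent_link_m(temp_c):.1f} m")
```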

Read full article

Easy, Cost-Effective Way to Add Power with Industrial PoE Injectors

PoE injectors can meet the growing power demands of energy-hungry devices in applications like physical security, transportation and automation – all in one device.

  • High-efficiency, low-waste power
  • Plug-and-play installation
  • Up to 240W of power from 8 ports

For newly developed or retrofit applications in need of maximum power without device limitations, these Power over Ethernet (PoE) injectors supply a high port count and up to 240 W of power.

PoE injectors join Hirschmann’s family of products built with industrial-grade housings and specific features to provide reliable power for industrial applications. They are the easiest and most cost-effective way to add high PoE power to both new and existing applications.

Benefits

  • Choose between active (integrated power supply) or passive (standalone module) devices for increased flexibility, depending on your needs.
  • Supports up to 240 W across 8 ports without load sharing, ensuring maximum power output. Each port can provide a maximum output of 30 W (see the budget sketch after this list).
  • Simple plug-and-play capability and compact size save time and space, while the injector automatically detects connected devices.
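As a quick sanity check on that power budget, the sketch below allocates per-port loads against the 240 W total and 30 W per-port figures quoted above; the device wattages are invented examples.

```python
# Power-budget check against the figures quoted above: up to 8 ports,
# 30 W per port, 240 W total. Device wattages are invented examples.

MAX_PORTS      = 8
MAX_PER_PORT_W = 30.0
MAX_TOTAL_W    = 240.0

device_loads_w = [25.5, 25.5, 15.4, 30.0, 12.95, 25.5]  # e.g. cameras, access points

def check_budget(loads):
    assert len(loads) <= MAX_PORTS, "too many devices for one injector"
    for port, watts in enumerate(loads, start=1):
        assert watts <= MAX_PER_PORT_W, f"port {port}: {watts} W exceeds the per-port limit"
    total = sum(loads)
    assert total <= MAX_TOTAL_W, f"{total} W exceeds the injector's total budget"
    return total

print(f"Total draw: {check_budget(device_loads_w)} W of {MAX_TOTAL_W} W available")
```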

Features

  • Benefit from up to 8 available ports that deliver 30 W of power each
  • Enable PoE communication with a high number of devices using just one PoE Injector
  • Save costs with an all-in-one solution and power transfer efficiency of more than 95 percent (less wasted power)
  • Use in extreme environmental conditions, including wide temperature ranges (-45 °C to +85 °C for injector, -25 °C to +70 °C for injector plus power supply)
  • Install quickly and easily with automatic device detection and classification (IEEE 802.3at)
  • Meet important industry standards
    – Safety of Industrial Control Equipment: EN 60950-1, EN 61131-2, UL 60950
    – Transportation: EN 50121-4

Download Bulletin

Read full article

10 Factors to Consider When Choosing a Rack PDU

At their simplest, rack power distribution units (PDUs) are designed to provide electrical protection and distribute power to networking equipment within racks and cabinets. As the needs and requirements of data centers change, so do the options for rack PDU performance.

There are several questions to consider before selecting rack PDUs that will work well for your data center application. The list below will point you in the right direction, ensuring that the PDUs you choose fit the design of your data center today and in the future.

1. Type of Mount

Depending on where you want to place it, a rack PDU can be mounted horizontally or vertically. One option is to install it horizontally inside the rack (taking up RU space); another is to mount it vertically on the back or side of the enclosure (taking up no RU space). You will often see one vertically mounted PDU on the left side and one on the right side of a data center cabinet (although rack PDUs can be mounted on either side, based on preference).

PDUs can be mounted so that power cords exit either at the bottom or top of the enclosure. (If your data center is on a slab, for example, the power cord needs to exit at the top of the enclosure because there is no raised floor for it to pass through.)

2. Amperage

Your power rating – the amount of sustained power draw a PDU can handle – determines the amperage level you’ll need. Why is this important? Because, for example, a PDU’s 30A fuse will blow if the circuit carries more than 30A of current for an extended period of time.

Per the National Electrical Code, 30A PDUs or higher are required to be equipped with a 20A breaker to prevent injury in the event of a short circuit.

3. Voltage

In addition to different amperages, there are different input voltage options for rack PDUs as well; 208/240V is the most common voltage output to computing gear, with a new trend moving toward 400V input. Confirm your infrastructure voltage, and you’ll know what type of voltage you need in your PDU.

4. Single- or 3-Phase Power

What type of input power do you have access to: single-phase power or 3-phase power? The type of power distribution in your data center will determine whether you need a single- or 3-phase PDU.

The difference involves where in the distribution system the phase is broken down. When it’s broken down at the distribution panel, power to the rack will be single-phase service (requiring single-phase rack PDUs). When all three phases are brought to each rack, then a 3-phase PDU is needed. In most data centers, the input power is 3-phase service.
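Amperage, voltage and phase together determine how much power a rack PDU can actually deliver. The sketch below applies the standard formulas (single-phase P = V × I; three-phase P = √3 × V × I) along with the common practice of limiting continuous draw to 80% of the rated current; the example values are illustrative.

```python
# Sketch: usable continuous power from a rack PDU given voltage, amperage and
# phase. Uses standard formulas plus an 80% continuous-load derating; the
# example voltage/amperage values are illustrative.

import math

def usable_power_w(volts, amps, three_phase=False, derate=0.8):
    raw_w = volts * amps * (math.sqrt(3) if three_phase else 1.0)
    return raw_w * derate

print(f"Single-phase 208 V, 30 A: {usable_power_w(208, 30):.0f} W")
print(f"Three-phase 208 V, 30 A:  {usable_power_w(208, 30, three_phase=True):.0f} W")
```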

Read full article

Copyright © 2023 Jaycor International