Achieving Solid Link Performance and Desired Link Distances with Singlemode Fiber

With so many new technologies and products available in the data center market, it is beneficial to plan in advance for potential amendments and upgrades. No matter which option you choose, low-loss, high-bandwidth fiber cable used in conjunction with low-loss fiber connectors will always provide solid link performance and the desired link distances with the number of connections you require.

As we’ve mentioned in earlier blogs, it is imperative to understand the power budget of the new data center architecture, as well as the desired number of connections in each link. The power budget indicates the amount of loss that a link (from the transmitter to the receiver) can tolerate while maintaining an acceptable level of operation.

This blog equips you with singlemode fiber (SMF) link specifications so your fiber connections will have sufficient power to reach the desired link distances. Unlike multimode fiber (MMF), SMF has virtually unlimited modal bandwidth, especially when operating near the zero-dispersion wavelength in the 1300 nm range, where material dispersion and waveguide dispersion cancel each other out.

Typically, a singlemode laser also has a much narrower spectral width, and the reach limit is not bound by differential modal dispersion (DMD) as it is in multimode fiber.
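
To make the power budget idea concrete, here is a minimal sketch of a link budget check. The power budget, fiber attenuation, link length and connector loss values are illustrative assumptions, not figures from this article or from any specific transceiver datasheet:

```python
# A minimal sketch of a power-budget check for a singlemode link.
# All numeric values are illustrative assumptions, not figures from
# this article or from a specific transceiver datasheet.

def channel_loss_db(length_km, atten_db_per_km, n_connections, loss_per_connection_db):
    """Total channel insertion loss: fiber attenuation plus connector losses."""
    return length_km * atten_db_per_km + n_connections * loss_per_connection_db

power_budget_db = 4.0          # assumed allowed loss from transmitter to receiver
length_km = 2.0                # desired link distance
atten_db_per_km = 0.4          # assumed SMF attenuation around 1310 nm
n_connections = 4              # mated connector pairs in the link
loss_per_connection_db = 0.35  # assumed low-loss connector spec

loss = channel_loss_db(length_km, atten_db_per_km, n_connections, loss_per_connection_db)
margin = power_budget_db - loss
print(f"Channel insertion loss: {loss:.2f} dB, margin: {margin:.2f} dB")
print("Link OK" if margin >= 0 else "Link exceeds the power budget")
```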

Read full article

The Right DC Supply Chain Can Improve Speed to Market

Bringing capacity online faster, without sacrificing reliability or performance, is crucial for hyperscale and colocation data center projects, as providers and tenants continue to require additional equipment to support their growing infrastructure.

We recently reflected on a panel discussion at last year’s CAPRE San Francisco Data Center Summit, which covered the top three things on the minds of data center industry executives today. In order of importance, their concerns were:

  1. Security
  2. Mean time to deploy
  3. Customer satisfaction

While all of these things are significant, No. 2 struck a chord. The ability to deploy data center capacity rapidly and efficiently can mean the difference between going live – or going broke! Mean time to deploy is not a concern that just popped up at a conference – rapid, on-time deployment has been a priority in the data center industry from Day One!

How can you reduce the amount of time it takes to “go live” for a tenant (or for your enterprise)? You could try to achieve better speed to market by working harder and faster, hiring more people and putting in longer hours. But there are only so many hours in the day – and only so much money in the budget.

Read full article

Time-Sensitive Networking – 3 Benefits It Will Bring to Railway Communication

As demand for mass transit expands in densely populated urban areas, so do passenger demands for more entertainment, on-time delivery and safety. The Industrial Internet of Things (IIoT) and emerging technologies like Time-Sensitive Networking (TSN) are making this feasible.

TSN is a novel technology, currently in development at the Institute of Electrical and Electronics Engineers (IEEE), that provides an entirely new level of determinism in standard IEEE 802.1 and IEEE 802.3 Ethernet networks. Standardizing Ethernet networks with TSN will deliver an important capability: deterministic, time-critical packet delivery.

It represents the next stage in the evolution of dependable, standardized automation technology and is certainly the next step in improving railway communication.

Time-Sensitive Networking Will Be Key for Railway Communication

Communication-based train control (CBTC), which uses wireless technologies to continually monitor and control the position of trains, could use TSN to guarantee real-time delivery of critical safety data on Ethernet networks that also carry non-safety-related data. Ethernet networks standardized with TSN will support higher data bandwidths and reduce the number of devices required for railway communication. Ultimately, with more information being transmitted across railway Ethernet networks, TSN will ensure that the most critical data is prioritized so that operations are not disrupted.

What does railway communication look like today, without TSN? The process is like a police car and a truck sharing a one-lane road. Imagine that a truck (representing non-time-critical information) is driving along a one-lane road and can’t see anyone behind or ahead, so the driver moves onto the next section of the road. But just as the truck enters this section, a police car (representing time-critical information) arrives with its emergency lights on and wants to overtake the truck to quickly reach an emergency further down the road. Unfortunately, the truck has already turned onto the next section of the one-lane road and cannot move out of the way, causing an unexpected delay for the police car!
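
To put rough numbers on this analogy, the sketch below compares the worst-case blocking delay a time-critical frame can experience when a full-size, non-time-critical frame has just started transmitting, with and without frame preemption. The link speed, frame size and minimum fragment size are illustrative assumptions, not values taken from the TSN standards:

```python
# A rough sketch of the "truck blocks the police car" effect on Ethernet.
# Without preemption, a critical frame must wait for a best-effort frame
# that has already started transmitting; with preemption, it only waits
# for the remainder of the current minimum-size fragment.
# All numbers are illustrative assumptions, not standard-mandated values.

LINK_RATE_BPS = 100e6           # assumed 100 Mbit/s link
BEST_EFFORT_FRAME_BYTES = 1500  # full-size "truck" frame
MIN_FRAGMENT_BYTES = 64         # assumed smallest non-preemptable chunk

def transmit_time_us(n_bytes, rate_bps=LINK_RATE_BPS):
    """Time to put n_bytes on the wire, in microseconds."""
    return n_bytes * 8 / rate_bps * 1e6

blocking_without_preemption = transmit_time_us(BEST_EFFORT_FRAME_BYTES)
blocking_with_preemption = transmit_time_us(MIN_FRAGMENT_BYTES)

print(f"Worst-case blocking without preemption: {blocking_without_preemption:.1f} us")
print(f"Worst-case blocking with preemption:    {blocking_with_preemption:.1f} us")
```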

Read full article

How Cabling Parameters Impact DSP

When dealing with subpar cable and patch cords, it can be frustrating to pinpoint what is causing dropped links – and, ultimately, downtime and business interruption. When cables aren’t constructed properly, performance can be affected by movement, such as being knocked or bumped, or even by frequent moves, adds and changes.

In these situations, the return loss of the patch cord can change to the point that it invalidates the digital signal processing (DSP), or echo cancellation, and causes the link to go down until a new set of parameters is calculated.

As the demands for signal transmission continue to increase, and the tolerance for downtime continues to diminish, maintaining the cable’s characteristic impedance becomes even more important.

Keep the Eye Clean

Designers of digital systems often look at the digital signal on an oscilloscope to view its eye pattern. An eye pattern is obtained by superimposing actual waveforms for large numbers of transmitted or received symbols. Eye patterns are used to estimate the bit error rate and the signal-to-noise ratio.
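
As a simple illustration of how an eye pattern is built, the sketch below superimposes many two-symbol windows of a simulated NRZ waveform. The pulse shaping, noise level and symbol count are illustrative assumptions, not measurements from real cabling:

```python
# A minimal sketch of building an eye pattern: overlay many received
# symbol waveforms on top of each other. All signal parameters below
# are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
samples_per_symbol = 32
n_symbols = 400

# Random NRZ bits, smoothed with a simple moving-average filter to mimic
# limited channel bandwidth, plus additive noise.
bits = rng.integers(0, 2, n_symbols) * 2.0 - 1.0
signal = np.repeat(bits, samples_per_symbol)
kernel = np.ones(samples_per_symbol // 2) / (samples_per_symbol // 2)
signal = np.convolve(signal, kernel, mode="same")
signal += rng.normal(0, 0.08, signal.size)

# Slice the waveform into two-symbol windows and overlay them.
window = 2 * samples_per_symbol
for start in range(0, signal.size - window, samples_per_symbol):
    plt.plot(signal[start:start + window], color="tab:blue", alpha=0.05)
plt.xlabel("time (samples)")
plt.ylabel("amplitude")
plt.title("Eye pattern (simulated NRZ signal)")
plt.show()
```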

Read full article

Introducing Magnum 5RX Security Router

This ruggedized device delivers high-performance routing and advanced firewall functionality while ensuring network security, helping to reduce total infrastructure costs, especially in high-volume deployments and highly distributed networks.

 

Ultimate Performance and Reliability in a 2-in-1 Package

Integrating advanced firewall security and routing in a fixed configuration, the Magnum 5RX Security Router provides current and legacy network interfaces and a valuable migration path to the new generation of network backbones. It features eight DB9 DTE serial ports along with six standard Gigabit Ethernet ports and one WAN (T1/E1 or DDS) port.

  • Combined 2-in-1 solution
  • Ensures optimal performance
  • Total network support with Magnum series

The GarrettCom Magnum 5RX Fixed Configuration Security Router offers a cost-efficient, two-in-one solution for industrial energy and utility applications.

The Magnum 5RX Security Router is a mid-level, industrial-grade security router serving the power generation, transmission and distribution markets by delivering an efficient edge-of-network solution.

Offering advanced routing and security capabilities in a single platform, the new router provides a natural migration path for customers planning a move to next-generation, high-performance Gigabit Ethernet and Transmission Control Protocol/Internet Protocol (TCP/IP) technology.

 

Combined two-in-one solution

  • Routing and security functionalities in a single device for streamlined management
  • Fixed configuration for a cost-effective system, especially in highly distributed deployment scenarios

Read full article

Upgrade: 100G Networks and Beyond with Installed-Base Multimode Fiber

Global IP traffic has been increasing rapidly in enterprise and consumer segments, driven by growing numbers of Internet users and connected devices, faster wireless and fixed broadband access, high-quality video streaming and social networking.

Data centers are being built to support the more robust computing, storage and content delivery services these users require.

Since 2016, 25G/50G server ports and 100G switch (ToR, leaf, spine and core) ports have become ubiquitous in most hyperscale data centers, replacing previous 10G servers and 40G switches. This speed migration has boosted overall system throughput by 2.5x at a small incremental cost. According to Dell’Oro’s forecast, total 100G switch port shipments will outnumber 40G switch port shipments in 2017-2018.

According to our recent survey with Mission Critical magazine, many enterprise data centers have started planning for access network migration to 25G and aggregate/core network migration to 100G; some organizations have already started to consider 50G/200G/400G down the road.

When the survey asked, “Which structured cabling type will be deployed in your new data center facility?”, unsurprisingly, multimode fiber cabling was still the most popular data transmission media for structured cabling.

Brownfield and Greenfield Multimode Fiber Cabling in Data Centers

As data center speeds go up, layer 0 (the physical media for data transmission) becomes increasingly critical to ensure link quality.

Web 2.0 companies such as Google, Amazon, Microsoft and Facebook started migration to 100G in 2015. Many of their new deployments use singlemode fiber to best suit hyperscale data center architecture.

Read full article

Fiber Infrastructure Deployment: Validate Link Budget

Prior to deploying a new fiber cabling infrastructure, or reusing the installed infrastructure, it’s vital to understand the link budget of the selected speed and transceivers in the new architecture, as well as the desired number of connections in each link.

In new fiber infrastructure deployments, more stringent link budget specifications will require higher-quality passive optical components with reduced channel insertion loss in the link. Typically, low-loss connectors not only allow more connections, but also support longer links with solid performance.

As you get ready for new fiber infrastructure deployment, there are four essential checkpoints that you should keep in mind:

  1. Determine the active equipment I/O interface based on application types
  2. Choose optical link media based on reach and speed
  3. Verify optical fiber standards developed by standards bodies
  4. Validate optical link budget based on link distance and number of connection points

In a series of blogs, we have discussed these checkpoints. This blog covers the final checkpoint (No. 4): validating the optical link budget based on link distances and number of connection points.

 

Validating the Multimode Link Budget

Currently available ultra-low-loss adapters achieve a loss of 0.2 dB per connection for MPO-8/12 and 0.35 dB per connection for MPO-24. These improvements have been achieved through a combination of new materials and polishing methods.
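
To see how these per-connection losses translate into the number of connections a link can support, here is a minimal sketch that divides the remaining loss allowance by the per-connection loss. The total channel insertion-loss allowance and the cabled-fiber attenuation coefficient are illustrative assumptions for this example; only the 0.2 dB and 0.35 dB per-connection figures come from the text above:

```python
# A minimal sketch of validating a multimode link budget against the
# number of connection points. The channel allowance and attenuation
# coefficient are assumed example values, not quoted from a standard.

CHANNEL_ALLOWANCE_DB = 1.9   # assumed total allowed channel insertion loss
FIBER_ATTEN_DB_PER_KM = 3.0  # assumed multimode attenuation at 850 nm
MPO12_LOSS_DB = 0.2          # ultra-low-loss MPO-8/12, per connection
MPO24_LOSS_DB = 0.35         # ultra-low-loss MPO-24, per connection

def max_connections(length_m, per_connection_db):
    """How many mated connections fit inside the remaining loss allowance."""
    fiber_loss = (length_m / 1000.0) * FIBER_ATTEN_DB_PER_KM
    remaining = CHANNEL_ALLOWANCE_DB - fiber_loss
    return max(0, int(remaining // per_connection_db))

for length_m in (70, 100, 150):
    print(f"{length_m} m link: up to {max_connections(length_m, MPO12_LOSS_DB)} "
          f"MPO-8/12 or {max_connections(length_m, MPO24_LOSS_DB)} MPO-24 connections")
```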

Read full article

Budgeting Sufficient Power: Key to Future-proof Fiber Infrastructure

With the technology transformations happening in today’s enterprises, many types of organizations – from hotels and gaming facilities to schools and offices – are deploying new fiber cabling infrastructure.

However, it’s crucial to understand the power budget of the new architecture, as well as the desired number of connections in each link. The power budget indicates the amount of loss that a link (from the transmitter to the receiver) can tolerate while maintaining an acceptable level of operation.

This blog provides you with multimode fiber (MMF) link specifications so you can ensure your fiber connections have sufficient power for best performance. In an upcoming blog, we’ll cover the link specifications for singlemode fiber.

 

Attenuation and Effective Modal Bandwidth

The latest IEC and ANSI/TIA standards set the maximum cabled fiber attenuation coefficient for OM3 and OM4 at 3.0 dB/km at 850 nm. Attenuation, also known as “transmission loss,” is the loss of optical power due to absorption, scattering, bending and other effects as light travels through the fiber. OM4 can support a longer reach than OM3, mainly due to its better light-confining characteristics, defined by its effective modal bandwidth (EMB).
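
As a quick worked example of what the 3.0 dB/km coefficient means over typical data center distances, the short sketch below computes the fiber attenuation contribution for a few link lengths (the lengths themselves are illustrative choices):

```python
# Fiber attenuation contribution at 850 nm using the 3.0 dB/km maximum
# cabled-fiber coefficient quoted above. Link lengths are illustrative.
MAX_ATTEN_DB_PER_KM = 3.0

for length_m in (30, 100, 150, 300):
    loss_db = (length_m / 1000.0) * MAX_ATTEN_DB_PER_KM
    print(f"{length_m:>4} m of OM3/OM4 at 850 nm: {loss_db:.2f} dB of fiber attenuation")
```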

Read full article

Checkpoint 3: Optical Fiber Standards for Fiber Infrastructure Deployment

To support the expanding cloud ecosystem, optical active component vendors have designed and commercialized new transceiver types under multi-source agreements (MSAs) for different data center types; standards bodies are incorporating these new variants into new standards development.

For example, IEEE 802.3 taskforces are working on 50 Gbps- and 100 Gbps-per-lane technologies for next-generation Ethernet speeds from 50 Gbps to 400 Gbps. Moving from 10 Gbps to 25 Gbps, and then to 50 Gbps and 100 Gbps per lane, creates new challenges in semiconductor integrated circuit design and manufacturing processes, as well as in high-speed data transmission.

As you get ready for new fiber infrastructure deployment to accommodate these upcoming changes, there are four essential checkpoints that we think you should keep in mind:

  1. Determine the active equipment I/O interface based on application types
  2. Choose optical link media based on reach and speed
  3. Verify optical fiber standards developed by standards bodies
  4. Validate optical link budget based on link distance and number of connection points

In a series of blogs, the first of which was published on March 23, 2017, we have been discussing these checkpoints, describing current technology trends and explaining the latest industry standards for data center applications. This blog covers checkpoint No. 3: verifying optical fiber standards developed by standards bodies.

Read full article

Rack Scale Design: “Data-Center-in-a-Box”

The “data-center-in-a-box” concept is becoming a reality as data center operators look for solutions that are easily replicated, scaled and deployed following a just-in-time methodology.

Rack scale design is a modular, efficient design approach that supports this demand for easier-to-manage compute and storage solutions.

What is Rack Scale Design?

Rack scale design solutions serve as the building blocks of a new data center methodology that incorporates a software-defined, hyper-converged management system within a concentrated, single-rack solution. In essence, rack scale design is a design approach that supports hyper-convergence.

Rack scale design is changing the data center environment. Read on to discover how the progression to a hyper-converged, software-defined environment came about; its pros and cons; the effects on the data center infrastructure; and where rack scale design solutions are headed.

What is Hyper-Convergence?

Two years ago, the term “hyper-convergence” meant nothing in our industry. By 2019, however, hyper-convergence is expected to be a $5 billion market.

Offering a centralized approach to organizing data center infrastructure, hyper-convergence can collapse compute, storage, virtualization and networking into one SKU, adding a software-defined layer to manage data, software and physical infrastructure. Based on software and/or appliances, or supplied with commodity-based servers, hyper-convergence places compute, storage and networking into one package or “physical container” to create a virtualized data center.

Read full article
