


Enterprise Edge Compute Taxonomy

As the world becomes increasingly connected, the proliferation of data generation sources, from smartphones to IoT sensors, has driven the need for computing power to reside closer to where data is produced. This has given rise to the concept of edge computing, which brings processing, storage, and analytics capabilities nearer to the data generation point rather than centralizing them in a distant data center.

In the enterprise context, we can categorize edge computing environments based on their proximity to the data source and the network topology. This “Edge Compute Taxonomy” helps organizations understand the various types of edge deployments and their optimal use cases.
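As a rough sketch, the taxonomy can be expressed as a simple classifier over distance from the data source, using the 30/100/300-mile thresholds given for the local, regional, and central tiers. The function and constant names below are illustrative, not part of any standard API:

```python
# Distance thresholds (in miles) from the taxonomy; the tier names and
# function signature are illustrative assumptions, not a standard.
LOCAL_EDGE_MILES = 30
REGIONAL_EDGE_MILES = 100
CENTRAL_EDGE_MILES = 300

def classify_edge_tier(distance_miles: float) -> str:
    """Map a deployment's distance from the data source to an edge tier."""
    if distance_miles <= LOCAL_EDGE_MILES:
        return "local"
    if distance_miles <= REGIONAL_EDGE_MILES:
        return "regional"
    if distance_miles <= CENTRAL_EDGE_MILES:
        return "central"
    return "centralized cloud/data center"

print(classify_edge_tier(12))   # local
print(classify_edge_tier(250))  # central
```

In practice, distance is only a proxy; network topology and latency measurements would drive a real placement decision.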

Local Edge Compute Environments

At the closest proximity to the data generation source, we have the “Local Edge” compute environments. These are typically located within 30 miles of the data source and reside either on-premises or in close proximity to the access layer (or last mile) of the network.

Common locations for local edge include:

– Central offices

– Cable headends

– Baseband hotels

– Tower and rooftop sites

– Premises of the data generation source (e.g., stadiums, airports)


The key characteristic of local edge is the low latency and high bandwidth connectivity it provides to the data source. This makes it ideal for applications that require real-time processing, such as:

– Autonomous vehicles

– AR/VR experiences

– Industrial IoT and process automation

– Wireless infrastructure optimization


Regional Edge Compute Environments

Moving slightly further from the data source, we have the “Regional Edge” compute environments. These are typically located within 100 miles of the data generation point and reside closer to the network aggregation layer, such as regional data centers or network Points-of-Presence (PoPs).

Regional edge compute environments offer a balance between proximity to the data source and access to greater computing resources. This makes them suitable for applications that require more sophisticated processing, but still need to be closer to the edge than a centralized cloud or data center.

Example use cases for regional edge include:

– Content delivery and caching

– Local/regional data analytics and AI inference

– Distributed enterprise applications

– Remote office/branch office (ROBO) computing


Core Network / Central Edge Compute Environments

At the furthest end of the spectrum, we have the “Central Edge” compute environments. These are located within 300 miles of the data source and are typically co-located with major network aggregation points, such as central data centers or cloud on-ramps.

Central edge environments provide the greatest computing power and storage capacity, but with higher latency compared to local and regional edge. They are well-suited for applications that require significant processing power, but can tolerate slightly higher latency, such as:

– Batch data processing

– Model training for AI/ML

– Centralized data lake/warehouse

– Disaster recovery and business continuity
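To make the latency tradeoff across the three tiers concrete, a back-of-the-envelope estimate of round-trip propagation delay over fiber can be computed from the tier distances. This ignores routing, queuing, and processing overhead, and assumes light travels through fiber at roughly two-thirds the speed of light in vacuum (about 124,000 miles per second):

```python
# Fiber propagation speed: ~2/3 of c, i.e. roughly 124,000 miles/second.
FIBER_MILES_PER_SEC = 124_000

def rtt_ms(distance_miles: float) -> float:
    """Best-case round-trip propagation delay in milliseconds."""
    return 2 * distance_miles / FIBER_MILES_PER_SEC * 1000

for tier, miles in [("local", 30), ("regional", 100), ("central", 300)]:
    print(f"{tier:8s} ({miles:3d} mi): ~{rtt_ms(miles):.2f} ms RTT")
```

Even in this best case, the central tier adds several milliseconds of round-trip delay relative to the local tier, which is why latency-sensitive workloads like AR/VR and industrial control favor local edge while batch and training workloads tolerate the central tier.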


By understanding this edge compute taxonomy, enterprises can better align their edge computing strategies with the specific needs of their applications and data sources. This, in turn, allows them to optimize for factors like latency, bandwidth, cost, and computing resources across the distributed enterprise.
