
Livin' on the Edge, Part 2: The Brains and Nervous System of Edge Computing

This is part two of a five-part series on edge computing, with a focus on computing and memory.

THE COWEN INSIGHT

Edge computing will require a shift in processing and memory resources to decentralized locations. Many of the qualities that have fueled cloud computing adoption will facilitate this new method of delivering computing resources, enabling new applications. There are key differences between edge computing and cloud, however, and investors need to be aware of them as this market grows over the next decade to rival cloud computing.

While communications infrastructure will require the largest architectural changes, MPUs and memory players will need to position themselves to capitalize on the burgeoning incremental edge market. As with enterprise and hyperscale data centers, digital semiconductors will serve as the central processing components of servers in this new form factor. This presents unique challenges and priorities that will need to be addressed.

We Expect Edge Cloud Computing to Follow a Similar Adoption Curve As Cloud

In the mid-2000s, cloud computing showed promise, but there was still plenty of pushback. Those in the technology and investment communities questioned the feasibility and value of a centralized/shared model versus an on-prem, private computing model. We see edge computing following a similar, potentially exponential adoption curve.

x86 Continues to Dominate, But Competitors Have the Potential to Edge Their Way In

CPUs will continue to be the brains of all servers, with accelerators attached to them; edge cloud shouldn't change that. However, CPUs at the edge will need to prioritize power efficiency, cooling, and ruggedness, as they will generally be installed in less controlled environments. Aside from a greater focus on I/O, server CPUs will largely be driven by the same factors as in the cloud – performance per watt per dollar.

Fragmented Memory Resource Allocation

Memory will also be affected by the redistribution of compute resources to the edge. Broadly speaking, edge workload requirements will demand DRAM in different flavors, quantities, and locations compared to centralized compute resources.

Accelerators – Taking a Larger Share of the TAM

The first decade of cloud computing’s ramp saw virtually the entire MPU TAM go to CPU players. This will not repeat in the edge computing market as accelerated computing architectures are entering the initial buildout phase with a proven model.


READ THE FIVE PART EDGE COMPUTING SERIES

Part 1: Evolving Tomorrow’s Internet focuses on communications services and cloud/internet.

Part 2: The Brains and Nervous System of Edge Computing focuses on computing and memory.

Part 3: Storage & Networking focuses on storage and networking.

Part 4: Bringing Analytics & Business Logic to Edge Compute focuses on software and security.

Part 5: Enabling & Empowering Locally focuses on end-devices & services enabled by edge computing.

Get the Full Report

If you’re already a member of our Research portal, log in.


If not, reach out to us directly for more information.
