December 4, 2018
By Lynnette Reese, Editor-in-Chief, Embedded Systems Engineering
In the past, growth has been driven primarily by processors with computation-focused architectures. The next generation of computing, however, is proving to be memory-centric, with a lake of data at the center of the processing power.
Semiconductor markets are seeing the emergence of memory-centric computing. The current trends of the data economy and Artificial Intelligence (AI) appear to be driving growth in memory. Backing this up, SEMI (semi.org), a global industry association serving the manufacturing supply chain for the electronics industry, has raised its worldwide semiconductor revenue forecast for the latter half of 2018 to 15 percent above 2017 revenues, up from the original 7.5 percent. The growth engine for chip revenue has been processors with computation-focused architectures: think desktops and supercomputers. This type of processor sits physically at the center of the system, where the northbridge connects the CPU to DRAM and PCIe, and the southbridge connects to the hard drive or a Solid-State Drive (SSD). However, the next generation of computing is proving to be more memory-centric, with a lake of data in the center of processing power.
New Data Economy Brings Changes
Cloud services like Amazon Web Services (AWS) access a centralized “data lake” that can easily comprise 100 or 200 TB of DRAM. According to AWS, a data lake “allows you to store all your structured and unstructured data at any scale. You can store your data as-is, without having first to structure the data.”[i] The surrounding processors access the data lake for different purposes: GPUs might be tuned for machine learning, FPGAs might be accelerating other processes, and other processors might simply be driving data traffic. The relatively new data economy and cloud-related elements such as the data lake architecture are among the factors that have been propelling the growth in memory sales over the last two or three years. DRAM prices increased dramatically due to shortages in 2018, although DRAMExchange indicates that memory prices may drop by as much as 15 to 25 percent next year in reaction to oversupply.[ii]
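The data-lake idea AWS describes, storing raw records as-is and imposing structure only at read time ("schema on read"), can be sketched in a few lines of Python. This is an illustrative toy, not AWS's implementation; the file names and record fields are invented for the example.

```python
import json
import pathlib
import tempfile

# A toy "data lake": raw records of different shapes are written as-is,
# with no upfront schema.
lake = pathlib.Path(tempfile.mkdtemp())
(lake / "event1.json").write_text(json.dumps({"sensor": "a", "temp_c": 21.5}))
(lake / "event2.json").write_text(json.dumps({"user": "bob", "clicks": 3}))

def read_temperatures(lake_dir):
    """Structure is imposed at read time: each consumer extracts
    only the fields it cares about and skips records of other shapes."""
    temps = []
    for path in lake_dir.glob("*.json"):
        record = json.loads(path.read_text())
        if "temp_c" in record:
            temps.append(record["temp_c"])
    return temps

print(read_temperatures(lake))  # -> [21.5]
```

A second consumer could scan the same files for `clicks` without either consumer agreeing on a schema in advance, which is the essential difference from a traditional database.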
Mike Howard, VP of DRAM and Memory Research, and Walt Coon, VP of NAND and Memory Research, both at Yole Développement, maintain that “In the last two years, the DRAM and NAND memory business hit record-high revenues. The industry announced an impressive 32% CAGR between 2016 and 2018, with revenue growing from US$ 77 billion to an estimated US$ 177 billion.”[iii] Graphics cards and servers drove the recent high demand for DRAM. The latest innovation in memory is 3D NAND, in which memory cells are stacked in multiple layers. 3D NAND memory is enabling a new storage solution, primarily because of all the data that we are creating, saving, and analyzing. Until recently, storage has been a choice between hard drives and flash memory (SSDs): flash memory is fast but more expensive, whereas the hard drive is slower but less expensive. 3D NAND technology is closing that cost gap rapidly.
Two years ago, the industry started with eight layers and then moved quickly to 32 layers; for the same area of silicon, that quadruples the output. Today the industry mainstream is 64-layer 3D NAND, and production of 96-layer 3D NAND has begun. In the same area of silicon where fabs used to make one transistor, they can now stack 96.
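The density arithmetic behind those layer counts can be checked directly. The sketch below assumes bit output per unit area scales linearly with layer count, a first-order simplification that ignores real-world differences in cell size, yield, and process overhead:

```python
# Relative bit output per unit silicon area, assuming output scales
# linearly with layer count (a first-order simplification).
layer_counts = [8, 32, 64, 96]
baseline = layer_counts[0]  # the 8-layer starting point

for layers in layer_counts:
    print(f"{layers:3d} layers -> {layers / baseline:.0f}x the 8-layer output")

# The move from 8 to 32 layers quadruples output in the same area:
assert 32 / 8 == 4
```

Under the same simplification, 96 layers would yield twelve times the 8-layer output from the same silicon footprint, which is why stacking, rather than lithography shrink, is now the main lever for NAND cost reduction.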
The cost and performance of flash memory are progressing rapidly and approaching the value of hard disk drives. The replacement of hard drives and the demand for more data storage are the present drivers of 3D NAND growth, which in turn is lowering the cost and improving the performance of flash memory. Altogether, logic is progressing, memory has taken center stage, and storage is transitioning from mechanical drives to solid-state devices.
Challenges still exist for memory, however. As Yole’s Coon and Howard state, “From a technological perspective, it continues to get more and more difficult to grow bit output on the wafer, which is a key for driving down cost per bit for both DRAM and NAND. The former is constrained by lithography shrinks, while the latter is constrained by limits on 3D stacking and wafer throughput losses as wafer processing time has increased significantly due to the transition from planar (2D) to 3D NAND.” [ii]
One area that most observers ignore is legacy nodes, meaning anything above 20nm. Legacy-node chips are growing in importance because two of the fastest-growing sectors for the semiconductor industry are automotive and industrial. These sectors do need some CPUs for processing, but they need many more inexpensive sensors, microcontrollers, and power management devices. Five years ago, German automakers had perhaps 300 integrated chips in each automobile; today, a high-end German car can have as many as 8,000. Since the majority are sensors and similar devices, the automotive sector is driving the legacy (or mainstream) nodes.
Cisco’s Visual Networking Index: Forecast and Trends, 2017–2022 predicts almost a 3x global increase in IP traffic in just five years. Of that traffic, 46% will be mobile traffic.[iv] Flash memory has enabled traffic growth and is replacing inferior storage media.
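A tripling of traffic in five years implies roughly 25 percent compound annual growth. The 3x and five-year figures come from the Cisco forecast cited above; the standard CAGR formula applied here is just an illustration of what that multiplier means year over year:

```python
# Compound annual growth rate implied by "3x growth in five years".
total_growth = 3.0   # traffic multiplier from the Cisco forecast
years = 5

cagr = total_growth ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # roughly 24.6% per year

# Sanity check: growing at that rate for five years recovers the 3x.
assert abs((1 + cagr) ** years - total_growth) < 1e-9
```

Sustained growth at that rate is what keeps pressure on both network infrastructure and the storage media behind it.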
One example of the growth in data created, used, and stored by society can be seen in the progression of available storage in mobile phones. Consumers used to buy the iPhone with 8 GB of NAND memory; then 16 GB became the premium version, whereas today most buyers choose iPhones with 128 to 256 GB of NAND memory. The Samsung Galaxy Note 9 is available with one terabyte of NAND memory. Data creation is the fundamental driver of many of the present industry trends.
With so much data, we need to analyze it and discard the garbage, because little can be done with raw data at that volume. The need for data analysis leads to the need for artificial intelligence and data science. All of the above interact with each other and create a very healthy industry.
Mike Howard is a member of the memory team at Yole Développement (Yole) as VP of DRAM and Memory Research. Howard’s mission at Yole is to deliver a comprehensive understanding of the entire memory and semiconductor landscape (with special emphasis on DRAM) via market updates, Market Monitors, and Pricing Monitors. He is also deeply involved in the business development of all memory activities. Howard has a deep understanding of the DRAM and memory markets, with a valuable combination of industry and market research experience. For the decade prior to joining Yole, Howard was the Senior Director of DRAM and Memory Research at IHS. Before IHS, Howard worked at Micron Technology, where he had roles in corporate development, marketing, and engineering.
Howard earned a Master of Business Administration at Ohio State University (United States), and a Bachelor of Science in Chemical Engineering and a Bachelor of Arts in Finance at the University of Washington.
Walt Coon joined Yole Développement’s memory team as VP of NAND and Memory Research, part of the Semiconductor & Software division. Coon leads the day-to-day production of market updates, Market Monitors, and Pricing Monitors, with a focus on the NAND market and semiconductor industries. In addition, he is deeply involved in the business development of these activities. Coon has significant experience within the memory and semiconductor industry. He spent 16 years at Micron Technology, managing the team responsible for competitor benchmarking and for industry supply, demand, and cost modeling. His team also supported both corporate strategy and Mergers & Acquisitions analysis. Previously, he spent time in Information Systems, developing engineering applications to support memory process and yield enhancement.
Coon earned a Master of Business Administration from Boise State University (Idaho, United States) and a Bachelor of Science in Computer Science from the University of Utah (United States).
Lynnette Reese is Editor-in-Chief, Embedded Intel Solutions and Embedded Systems Engineering, and has been working in various roles as an electrical engineer for over two decades.