# More than 18 Years of Experience

We offer high-quality alternative chips at lower prices to help you reduce costs.

 
  • Diversified product lines

  • Meet various chip requirements

  • Provide free samples

About us
Our Advantages

Why Choose Us

Your ingenuity is what turns imagination into innovation. We give you access to the components you need to breathe life into your vision.

Latest News

  • July 13, 2025
    MCU, A Big Change
Introduction

In 2025, within only half a year, leading MCU manufacturers such as ST, NXP, and Renesas almost simultaneously released automotive MCU products built around new embedded memories (such as PCM and MRAM), breaking the long-standing pattern of MCUs dominated by embedded flash. Although it is still too early to speak of a "standard configuration", it is clear that new memory has leapt from "experiment" to "strategic layout" and has begun to reshape the MCU ecosystem.

In the past, the MCU was a "small and beautiful" device used for basic control logic. In recent years it has been evolving toward "small and strong": processes have moved from traditional 40nm to 22nm, 16nm, and even more advanced nodes, and AI acceleration, security units, and wireless modules have been integrated, making the MCU a candidate for the "car brain" and the "edge computing hub". Behind this, a long-ignored but crucial technology has been catching up: embedded non-volatile memory (eNVM).

Under the trend of software-defined vehicles, OEMs and Tier 1 suppliers face unprecedented challenges: ECU complexity is surging and functions are being consolidated; OTA updates, AI inference, and model loading keep making software "thicker"; and memory capacity and read/write performance have become bottlenecks in the vehicle architecture. Traditional flash can no longer keep up in density, speed, power consumption, or endurance. In this context, new memories (PCM, MRAM) have become key weapons in the evolution of the MCU.

ST chooses phase-change memory (PCM)

Phase-change memory (PCM) is an emerging non-volatile memory technology. Its basic principle is to store information through the phase change of a material between amorphous and crystalline states. The basic mechanism of PCM was invented by Stanford Ovshinsky in the 1960s, and STMicroelectronics holds a patent license for that original work; ST is also the first manufacturer to put PCM into automotive-grade MCUs.

ST describes the working principle of PCM on its website. The memory is made of a germanium-antimony-tellurium (GST) alloy and exploits the material's ability to switch rapidly, under thermal control, between amorphous and crystalline states. The two states correspond to logic 0 and logic 1 respectively and can be distinguished electrically: the amorphous state has high resistance (logic 0) and the crystalline state has low resistance (logic 1). PCM supports read and write operations at low voltage and offers several substantial advantages over flash and other embedded memory technologies. (Working principle of PCM; source: ST)
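The resistance-based encoding described above is easy to picture with a toy model. The sketch below is purely illustrative: the resistance values and decision threshold are assumptions for the example, not ST specifications. It maps a measured cell resistance to a logic level the way the article describes, reading the high-resistance amorphous state as 0 and the low-resistance crystalline state as 1.

```python
# Toy model of how a PCM cell's resistance maps to a logic value, as described above.
# The resistance values and threshold are illustrative assumptions, not ST datasheet figures.

AMORPHOUS_OHMS = 1_000_000     # high-resistance (amorphous) state -> logic 0
CRYSTALLINE_OHMS = 10_000      # low-resistance (crystalline) state -> logic 1
READ_THRESHOLD_OHMS = 100_000  # assumed decision threshold between the two states

def read_pcm_cell(resistance_ohms: float) -> int:
    """Return the logic value encoded by a cell with the given resistance."""
    return 1 if resistance_ohms < READ_THRESHOLD_OHMS else 0

if __name__ == "__main__":
    for r in (AMORPHOUS_OHMS, CRYSTALLINE_OHMS):
        print(f"{r:>9} ohms -> logic {read_pcm_cell(r)}")
```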
After years of research and development, in April 2025 ST launched Stellar with xMemory, a new generation of scalable memory embedded in its Stellar series of automotive microcontrollers. The core of Stellar xMemory is ST's proprietary PCM technology, which the company claims uses the industry's smallest qualified memory bit cell and can transform the challenging process of developing software-defined vehicles (SDVs) and evolving electrification platforms. ST's Stellar P and G series automotive MCUs will be equipped with the latest generation of PCM technology via xMemory.

The Stellar P and Stellar G series are Stellar Integration MCUs aimed at zonal controllers, domain controllers, and body applications. The first to launch will be the Stellar P6 MCU, designed for new powertrain trends and electric-vehicle (EV) architectures, with production starting in the second half of 2025. With Stellar and xMemory there is no need to manage multiple devices with different memory options, or to bear the related development and certification costs: a single device with scalable memory gives customers an efficient and economical solution. This simplification lets automakers design for the future and leave more room for innovation later in the development cycle, reducing development costs and accelerating time to market through a more streamlined supply chain.

(Figure: cross-section of an embedded PCM bit cell in FD-SOI technology, showing the heating element that quickly flips the memory cell between crystalline and amorphous states.)

ST points out that choosing the right MCU early in the SDV life cycle ensures there is enough on-chip memory for future software development. Today, specifying too much memory raises cost, while specifying too little may force the later selection and re-qualification of other MCUs with additional memory, adding complexity, cost, and delay. Stellar MCUs with xMemory are competitively priced, simplify the OEM supply chain, and shorten qualification time by extending product lifecycles and maximizing reuse between projects, thereby accelerating time to market.

NXP and Renesas embrace MRAM

Magnetoresistive RAM (MRAM) is another non-volatile memory "black technology". It stores data using the physical properties of magnetic materials, offering very high write speed, low power consumption, and extremely high endurance, and it has been adopted by companies such as NXP and Renesas.

NXP was an early mover in MRAM-based automotive MCUs. In March this year, NXP Semiconductors announced its S32K5 series of automotive MCUs, the industry's first MCUs built on a 16nm FinFET process with built-in MRAM, an important milestone. The S32K5 series is designed to extend the NXP CoreRide platform, providing pre-integrated zonal and electrification system solutions that support the evolution of scalable software-defined vehicle architectures. Automakers are increasingly adopting zonal architectures, each with its own approach to integrating and distributing the functions of electronic control units (ECUs). At the heart of these solutions is an advanced MCU architecture that combines real-time performance with low latency, deterministic communication, and innovative isolation capabilities. The addition of high-performance MRAM significantly speeds up ECU programming, both at the factory and during over-the-air (OTA) updates: MRAM writes more than 15 times faster than traditional embedded flash, giving automakers more flexibility to deploy new software features throughout the vehicle's life cycle.

In July 2025, Renesas also released an MCU with built-in MRAM, although on a 22nm process rather than NXP's 16nm. The device is equipped with 1MB of MRAM and 2MB of SRAM.
The use of MRAM is said to be a major feature of the second-generation RA8 series. In addition to high endurance and data retention, MRAM offers high-speed reads and writes, requires no erase before writing, and has low power consumption. Renesas presented its high-speed MRAM read/write technology for high-performance microcontrollers at the International Solid-State Circuits Conference (ISSCC 2024), and the RA8P1 uses this technology. For applications that require larger memory capacity, the device provides an octal SPI interface and a 32-bit external bus interface supporting XIP/DOTF, and system-in-package (SiP) variants with 4MB or 8MB of integrated external flash are also available. On the peripheral side, it supports parallel camera input, MIPI CSI-2, serial audio input, and multimodal AI voice input through PDM, and it also integrates a 16-bit ADC, graphics HMI functions, and a variety of serial interfaces.

TSMC: MRAM and RRAM go hand in hand

As the world's leading foundry, TSMC has bet on two new embedded memory technologies: MRAM and RRAM.

At its 2025 Technology Symposium, Dr. Y.J. Mii, TSMC Executive Vice President and Co-Chief Operating Officer, pointed out that eFlash has hit a scaling bottleneck at the 28nm node, and that new-generation non-volatile memory (NVM) must take over its role at more advanced processes. TSMC has therefore proposed introducing RRAM and MRAM as embedded memories at 22nm, 16nm, and 12nm, and pushing further to the 6nm and 5nm nodes.

TSMC is one of the few manufacturers to have achieved large-scale mass production of RRAM. It currently mass-produces RRAM on 40nm, 28nm, and 22nm processes with automotive-grade qualification; 12nm RRAM has entered the customer tape-out stage, and a 6nm version is underway. Infineon's new generation of AURIX MCUs uses TSMC's eRRAM, which has become an important embedded memory solution for its automotive platform. The advantages of RRAM are low process complexity, the ability to be built directly in the back-end-of-line (BEOL) metal layers, and full compatibility with logic processes, which makes it adaptable to many MCU architectures and especially suitable for power-sensitive and cost-sensitive consumer and automotive applications.

MRAM, by contrast, has a more complex process but superior performance characteristics: write speeds more than ten times those of flash, non-volatile storage combined with extremely high endurance, and suitability for workloads that demand high-speed writes, frequent OTA updates, AI inference, and other complex tasks. For in-vehicle computing platforms such as ADAS and AI SoCs that pursue compute density, data throughput, and real-time performance, MRAM may be the most attractive replacement for eFlash. TSMC has achieved mass production of MRAM at the 22nm node, 16nm MRAM has entered customer preparation, 12nm is under development, and a more aggressive roadmap extends to 5nm. In May 2025, TSMC announced that it would set up its first European Design Center (EUDC) in Munich, Germany, focusing on R&D and customer support for MRAM in automotive applications. This will be TSMC's tenth design center worldwide and is scheduled to open officially in the third quarter of 2025.
The center's service areas include automotive, industrial, AI, telecommunications, and the Internet of Things. This means TSMC is not only promoting new memories on its process platforms but also deepening its automotive development ecosystem as part of its global layout. Beyond advancing process nodes, TSMC is also pursuing breakthroughs in several directions:

  • 3D RRAM for MCUs: stacked embedded-memory packaging to free up more on-chip area;
  • SOT-MRAM (spin-orbit torque MRAM): lower power consumption and faster writes than conventional STT-MRAM, and expected to reach volume production;
  • Silicon photonics platforms: combining optical interconnect with memory interfaces, targeting data centers and edge computing.

The implementation of these technologies will further consolidate TSMC's leading position in specialty processes and the embedded-memory ecosystem.

The trend toward storage-compute integration

Whether PCM, MRAM, or RRAM, these are not merely memory substitutes; they are catalysts for changes in MCU architecture. The new memory technologies represent a deeper trend toward storage-compute integration, which is not simply a matter of swapping the storage medium but a coordinated evolution of memory architecture and compute architecture. In the MCU field, the boundary between storage and computing is becoming increasingly blurred.

In traditional MCUs, storage and computing are separate modules: computation is performed by the CPU or dedicated accelerators, while data is stored and managed in external or internal flash, SRAM, and other devices. As computing tasks grow more complex, especially with the rising demands of machine learning, AI inference, and edge computing, this separation is becoming increasingly unsuitable. New memories such as MRAM and PCM open up fresh opportunities for storage-compute integration. PCM in particular, through its phase-change characteristics, offers not only non-volatile storage but can also play a "near-compute" role in some applications, easing the data-transfer bottleneck and accelerating processing. MRAM's high-speed reads and writes likewise allow it to work closely with compute modules, improving efficiency in scenarios such as edge AI inference and real-time data processing. In an era of edge AI, fragmented OTA updates, and agile software, the "intelligence" of an MCU increasingly depends on its memory capabilities. Future MCU architectures can be expected to combine storage and computing ever more tightly to create more efficient, flexible, and intelligent systems.

Conclusion

Over the past decade we have grown accustomed to viewing the MCU as a "control" device, with its embedded memory as a mere supporting component. In the era of AI, SDVs, and edge intelligence, memory is moving from behind the scenes to center stage and becoming a core part of the computing architecture. This is not only a change of materials and an evolution of processes; it is a key step for the MCU to move from "usable" to "scalable" and "evolvable".
In this wave of microcontroller upgrades triggered by embedded memory, we see not only the diverging technology routes of the leading manufacturers, but also the accelerated adaptation of the entire industry chain, from foundries to toolchains and from automotive to industrial applications. This transformation has only just begun.
  • July 12, 2025
    HBM, A New War
Entering the "post-AI" era, HBM is no longer just a standard component of high-performance AI chips such as GPUs and TPUs; it has become a strategic high ground fiercely contested by the semiconductor giants. Samsung, SK Hynix, and Micron, the leading memory makers, all see HBM as a key engine of future revenue growth, and they appear to share a consensus: to dominate the memory market, one must first master the core technologies of HBM. So, in this war without gunpowder, which technologies are worth watching? Let's take a closer look.

Is customization the only way out?

Customization may be one of HBM's ultimate destinations. In fact, more than two years ago, when HBM was first taking off, SK Hynix and Samsung were already discussing the trend toward customization. With cloud giants designing their own AI chips, demand for HBM has only grown, making customization all but inevitable. In August last year, SK Hynix vice president Yoo Sung-soo said that all of the "Magnificent 7" (the seven largest US tech companies: Apple, Microsoft, Alphabet, Amazon, Nvidia, Meta, and Tesla) had approached SK Hynix requesting customized HBM.

In June this year, South Korean media reported that SK Hynix was simultaneously targeting Nvidia, Microsoft, and Broadcom, companies expected to become "heavyweight customers" in the customized HBM market. It has recently reached agreements with all three to supply customized HBM and has begun design work based on each company's needs. SK Hynix is reported to prioritize the supply plan of its largest customer, NVIDIA, before settling the list of other customers. Industry insiders note that "considering SK Hynix's production capacity and the launch schedules of the major technology companies' AI services, it cannot meet the needs of all M7 customers at once", but also that "given how the HBM market is changing, several new customers may be added in the future."

SK Hynix also announced in April this year that it will shift toward customization starting from the seventh-generation HBM (HBM4E) and has partnered with TSMC. It plans to adopt TSMC's advanced logic process for the HBM4 base die, with the first customized HBM products expected in the second half of next year. Having secured multiple heavyweight clients, SK Hynix is now much more likely to keep its dominant position in the next-generation customized HBM market. According to TrendForce, SK Hynix currently holds roughly 50% of the HBM market, far ahead of Samsung Electronics (30%) and Micron (20%); for the latest HBM3E products alone, its share is as high as 70%.

Samsung Electronics, for its part, is reported to be in discussions with multiple customers about supplying customized HBM. Given its recent success in supplying HBM3E to AMD, the world's second-largest AI chip maker, the industry expects it to soon win customers for HBM4 and custom HBM as well; Samsung is said to be in concrete negotiations with Broadcom and AMD over HBM4 products.
Compared with the two Korean manufacturers, Micron, far away in the United States, appears to be moving more slowly. In June this year, Raj Narasimhan, senior vice president and general manager of Micron's Cloud Memory Business Unit, said that HBM4 production plans would be closely aligned with the readiness of customers' next-generation AI platforms, to ensure seamless integration and a timely capacity ramp to meet market demand. He added that in addition to supplying the latest HBM4 to mainstream customers, customers are also asking for customized versions, development of the next-generation HBM4E is already underway, and working with specific clients on customized HBM solutions will further increase the value of memory products.

At this point, many readers may ask: what are the benefits of customized HBM, and why are DRAM makers and cloud giants flocking to it?

First, it should be clarified that the key to customized HBM (cHBM) lies in integrating the functionality of the base die into the logic die designed by the SoC team. This includes controlling the I/O interfaces, managing the DRAM stack, and carrying the direct access (DA) ports used for diagnostics and maintenance. This integration requires close collaboration with the DRAM manufacturer, but it gives SoC designers greater flexibility and tighter control over access to the HBM core die stack. Designers can couple memory and processor dies more closely and optimize power, performance, and area (PPA) for their specific application. SoC designers can freely configure and instantiate their own HBM memory controllers and interact directly with the HBM DRAM stack through DFI2STSV bridging. The logic die can also integrate enhanced features such as programmable built-in self-test (BIST) controllers, die-to-die (D2D) adapters, and high-speed interfaces such as the Universal Chiplet Interconnect Express (UCIe), enabling communication with processor dies in a full 3D stack. Because this die is manufactured in a logic process rather than a DRAM process, existing designs can be reused.

One important advantage of customized HBM is that it significantly reduces the latency introduced by the interposer in the data path, and with it the associated power and performance losses, by reusing existing high-speed die-to-die interconnects (such as UCIe) to shorten the effective distance between memory and processor dies. This flexibility suits a range of scenarios, from cloud providers running edge AI applications with extreme cost and power constraints to systems chasing maximum capacity and throughput for complex AI and machine-learning workloads.

However, customized HBM still faces challenges: the whole concept is new and the technology is at an early stage of development. Like all innovations, the road ahead will not be smooth. Integrating base-die functions into the logic die means end users must consider the entire lifecycle from a silicon lifecycle management (SLM) perspective, from design and bring-up through mass production to field deployment. For example, after wafer-level stacking of the HBM dies, the responsibility for screening DRAM cell defects falls on the end user. This raises questions such as: how should users handle the specific DRAM test algorithms recommended by suppliers?
And can users carry out comprehensive in-field testing and diagnosis of the HBM during planned downtime? At present, successfully deploying customized HBM requires a complete ecosystem that brings together IP providers, DRAM manufacturers, SoC designers, and ATE (automated test equipment) companies. For example, because of the sheer number and density of interconnections, traditional ATE can no longer be used as-is for customized HBM testing. In short, customized HBM has become a major trend, and whether manufacturers like it or not, it will occupy a significant place in the HBM4 generation.

Hybrid bonding: the technical challenge that cannot be bypassed?

Besides customization, hybrid bonding is another important direction for HBM's future. As stack heights keep increasing, traditional soldering techniques face serious challenges. The flux used today removes metal-surface oxides and promotes solder flow, but its residues cause problems such as larger stack gaps and concentrations of thermal stress, a contradiction that is especially acute in precision packaging such as high-bandwidth memory. Samsung, SK Hynix, and Micron are all considering hybrid bonding for the next generation of HBM.

Let's first review how HBM dies are bonded today. In traditional flip-chip bonding, the die is "flipped" so that its solder bumps (also known as C4 bumps) align with the bonding pads on the substrate. The entire assembly is placed in a reflow oven and heated uniformly to around 200°C-250°C, depending on the solder material. The solder bumps melt, forming electrical interconnections between the die and the substrate. As interconnect density increases and the pitch shrinks below 50 µm, the flip-chip process runs into problems. Because the whole package sits in the oven, the die and the substrate expand at different rates (different coefficients of thermal expansion, CTE), causing warpage and interconnect failures. Molten solder can also spread beyond its designated area, a phenomenon called solder bridging, which creates unwanted electrical connections between adjacent pads and can cause shorts and chip defects.

This is where TCB (thermocompression bonding) comes in, solving the flip-chip limitations once the pitch shrinks below a certain point. TCB's advantage is that heat is applied locally to the interconnect points through a heated bond head, rather than uniformly in a reflow oven. This reduces the heat transferred to the substrate, easing thermal stress and CTE challenges and producing stronger interconnections, while pressure applied to the die further improves bond quality. Typical process temperatures are 150°C-300°C and pressures are 10-200 MPa. TCB allows a higher contact density than flip chip, up to 10,000 contacts per square millimeter in some cases, but the price of higher precision is lower throughput: flip-chip machines can exceed 10,000 chips per hour, whereas TCB throughput is in the range of 1,000-3,000 chips per hour. The standard TCB process also still requires flux.
During heating, copper can oxidize and cause interconnect failures, and flux is the coating used to remove those copper oxides. But once the interconnect pitch shrinks to around 10 µm or below, the flux becomes much harder to remove and leaves sticky residue, which can slightly deform the interconnects and lead to corrosion and shorts. Fluxless bonding emerged in response, but it only extends the pitch down to about 20 µm, or 10 µm at best, and can serve only as a transitional technology. When the I/O pitch drops below 10 µm, hybrid bonding is required. Hybrid bonding stacks DRAM dies through direct copper-to-copper bonds, eliminating the traditional bump structures altogether. This approach not only shrinks the package significantly but also roughly doubles energy efficiency and overall performance.

According to industry sources, as of May 7 Samsung Electronics and SK Hynix were both pushing to use hybrid bonding in mass production of their next-generation HBM. Samsung is expected to adopt the technology in HBM4 (the sixth generation) as early as next year, while SK Hynix may first introduce it in the seventh-generation HBM4E. The current fifth-generation HBM3E still uses thermocompression bonding, fixing and stacking the dies through heat, pressure, and bump connections. Samsung mainly buys TCB equipment from its subsidiary SEMES and from Japan's Shinkawa, while SK Hynix relies on Hanmi Semiconductor and Hanwha Semiconductor. Micron, which supplies HBM to Nvidia, also buys equipment from South Korean and US vendors and from Shinkawa.

As the hybrid bonding market opens up, the technology is expected to trigger a major reshuffle in the semiconductor equipment field; once adopted successfully, hybrid bonding may become the mainstream process for future HBM stacking. To seize the opportunity, Applied Materials of the US has acquired a 9% stake in Besi, the only company in the world with advanced production capability for hybrid bonding equipment, and has moved first to bring its hybrid bonding equipment into the system-level semiconductor market. At the same time, Hanmi Semiconductor and Hanwha Semiconductor are accelerating the development of next-generation die-stacking equipment, pursuing both hybrid bonding and solder-based bonding tools to strengthen their competitiveness. If customized HBM is a contest between DRAM makers and the cloud giants, then hybrid bonding is a game between DRAM makers and bonding-equipment makers. With HBM officially entering the HBM4 era in the second half of this year, attention to hybrid bonding is likely to grow further.
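As a rough summary of the pitch thresholds quoted in this section, the sketch below picks a bonding approach from the interconnect pitch alone. It is an illustration only, using the approximate cutoff values mentioned above; real process selection depends on many more factors (throughput, cost, materials, equipment availability).

```python
# Rough decision helper based on the approximate pitch thresholds quoted above.
# Illustrative only; actual process selection involves far more than pitch.

def select_bonding_process(pitch_um: float) -> str:
    """Map an interconnect pitch (in micrometers) to the bonding approach discussed above."""
    if pitch_um >= 50:
        return "flip-chip mass reflow"
    if pitch_um >= 10:
        return "thermocompression bonding (TCB), fluxless variants below ~20 um"
    return "hybrid (Cu-Cu) bonding"

if __name__ == "__main__":
    for pitch in (100, 40, 15, 8):
        print(f"{pitch:>4} um pitch -> {select_bonding_process(pitch)}")
```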
What other new technologies are there?

It is worth mentioning that in June this year, the Korea Advanced Institute of Science and Technology (KAIST), a national research institution in South Korea, released a 371-page research report systematically mapping the evolution of HBM technology from HBM4 to HBM8. It covers improvements in bandwidth, capacity, I/O width, and thermal design, as well as packaging methods, 3D stacking structures, memory-centric architectures with embedded NAND storage, and even machine-learning-based power control.

It should be stressed that this document is not a product roadmap from a commercial company, but an academic projection of how HBM technology might evolve, based on current industry trends and research progress. Even so, it offers a glimpse of where HBM may be heading. Let's first look at the technical features of each generation from HBM4 to HBM8.

HBM4: pioneer of customized design. As the start of the new generation of HBM, HBM4's biggest innovation is its customized base-die design. By integrating near-memory computing (NMC) processors and LPDDR controllers, HBM4 allows direct access to HBM and LPDDR without CPU intervention, significantly reducing data-transfer latency and improving overall system efficiency. HBM4 supports several flexible data-transfer modes, including direct reads and writes between the GPU and HBM, data migration between HBM and LPDDR, and indirect GPU access to LPDDR through HBM. Dual command execution further improves multitasking efficiency and provides strong support for complex AI workloads.

HBM5: a breakthrough in 3D near-memory computing. HBM5 pushes 3D near-memory computing to new heights. By integrating an NMC processor die and a cache die, connected with dedicated TSV interconnects and power networks, HBM5 achieves a highly energy-efficient computing architecture. Distributed power/ground and thermal TSV arrays reduce IR drop and improve heat dissipation. Notably, HBM5 introduces AI design-agent optimization, using intelligent algorithms to optimize TSV layout and decoupling-capacitor placement and thereby significantly reduce power-supply-induced jitter (PSIJ). This not only improves system stability but also lays the groundwork for the intelligent design of later generations.

HBM6: innovation in multi-tower architecture. HBM6's biggest highlight is the quad-tower architecture, in which four DRAM stacks share one base die and deliver an astonishing 8 TB/s of bandwidth through 8096 I/O channels. This design improves bandwidth and, through resource sharing, cost-effectiveness. Integrated L3 cache is another important innovation: by reducing direct HBM accesses, the L3 cache significantly improves LLM inference performance. Test data show that HBM6's embedded L3 cache reduces HBM accesses by 73% and latency by 87.3%. A crossbar switch network enables HBM cluster interconnection, optimizing high-throughput, low-latency LLM inference.

HBM7: a hybrid storage ecosystem. HBM7 builds out a complete hybrid storage ecosystem. By integrating high-bandwidth flash (HBF), it forms an HBM-HBF storage network with a total capacity of 17.6 TB, meeting the storage needs of large-scale AI inference. Combining this with 3D-stacked LPDDR further expands the memory hierarchy, achieving 4096 GB/s of interconnect bandwidth on a glass interposer. Embedded cooling structures are another key feature of HBM7: heat is transferred efficiently from the die to the cooling fluid through thermal transmission lines and fluid TSVs.
The introduction of LLM-assisted interactive reinforcement learning (IRL) makes decoupling-capacitor placement and PSIJ optimization more intelligent and precise.

HBM8: the era of full 3D integration. HBM8 represents the pinnacle of the roadmap, achieving true full 3D integration and HBM-centric computing. A double-sided interposer supports various 3D extension architectures such as GPU-HBM-HBM, GPU-HBM-HBF, and GPU-HBM-LPDDR, offering flexible configurations for different applications. The fully 3D GPU-HBM integrated architecture is HBM8's core innovation: the GPU sits on top of the memory stack, which both aids heat dissipation and achieves seamless integration of storage and compute. AI design agents make 3D placement and routing optimization more intelligent, co-optimizing thermal and signal integrity.

Looking at the overall trend, the evolution of HBM shows order-of-magnitude leaps. Bandwidth rises an astonishing 32-fold, from 2.0 TB/s for HBM4 to 64 TB/s for HBM8, achieved along two dimensions: a large increase in the number of I/Os, from 2048 to 16384, and a steady increase in per-pin data rate, from 8 Gbps to 32 Gbps. Capacity per module grows from 48 GB for HBM4 to 240 GB for HBM8, through more stacked layers and higher per-die capacity. Power consumption rises from 75 W to 180 W, but given the performance gains, overall energy efficiency still improves significantly.
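The bandwidth figures above follow directly from the I/O count and per-pin data rate (bandwidth = number of I/Os x data rate). The quick check below uses only the roadmap values quoted in this article; the small gap for HBM8 comes from decimal versus binary rounding of terabytes.

```python
# Quick sanity check of the bandwidth figures quoted above:
# per-stack bandwidth = number of I/Os x per-pin data rate.

def stack_bandwidth_tbps(io_count: int, data_rate_gbps: float) -> float:
    """Return stack bandwidth in TB/s given I/O count and per-pin rate in Gb/s."""
    return io_count * data_rate_gbps / 8 / 1000  # Gb/s -> GB/s -> TB/s

print(f"HBM4: {stack_bandwidth_tbps(2048, 8):.1f} TB/s")    # ~2.0 TB/s, as quoted
print(f"HBM8: {stack_bandwidth_tbps(16384, 32):.1f} TB/s")  # ~65.5 TB/s decimal, quoted as 64 TB/s
```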
Key paths of technological innovation

Another notable feature of HBM's evolution is the continuous breakthrough in 3D integration. Starting from HBM4, the roadmap gradually transitions from traditional microbump bonding to bumpless Cu-Cu direct bonding, which both sharply reduces contact resistance and greatly increases interconnect density, laying the foundation for later high-density 3D stacking. TSV (through-silicon via) technology, the core of 3D integration, provides efficient electrical connections between vertically stacked dies; by shortening interconnect length, TSVs reduce RC delay and power consumption and enable high-bandwidth data transfer. At the HBM8 stage, coaxial TSVs further improve signal integrity and support 32 Gbps per-pin data rates. Interposer technology also advances, from a single silicon interposer to a silicon-glass hybrid interposer, which breaks through the size limits of pure silicon while preserving signal integrity and combines the high bandwidth of silicon with the large-area scalability of glass, supporting complex multi-tower architectures. Notably, as HBM performance keeps climbing, heat dissipation has become a key bottleneck.

The HBM roadmap therefore lays out a clear evolution of cooling technology, stepping up from traditional air cooling to more advanced solutions. HBM4 adopts direct-to-chip (D2C) liquid cooling, which cools the die directly with liquid and dissipates heat more efficiently than air. At the HBM5 and HBM6 stages, immersion cooling becomes mainstream, submerging the entire module in a dielectric coolant for more uniform and efficient heat removal. The most advanced approach is the embedded cooling used in HBM7 and HBM8, which achieves die-level precision cooling through fluid TSVs (F-TSVs) and microchannel structures, transferring heat directly from the HBM die to the coolant via thermal transmission lines (TTLs).

These evolutions bring significant performance gains. For LLM inference, HBM6's quad-tower architecture raises inference throughput on the LLaMA3-70B model by 126%. In terms of energy efficiency, HBM7's NMC architecture reduces data movement, cutting power consumption for GEMM workloads by more than 30%. System-level scalability also improves markedly: HBM8's full 3D architecture supports multiple GPU-HBM clusters with a total bandwidth of up to 1024 TB/s, providing powerful memory support for exascale computing. These gains not only meet the needs of today's AI applications but also lay the technical foundation for future artificial general intelligence (AGI).

Closing thoughts

From customized HBM to hybrid bonding, and from next-generation interposers to converged memory architectures, HBM technology is evolving at an ever faster pace. In such a complex technology race, only players with a system-level perspective and the ability to deeply integrate multiple process disciplines and ecosystem resources will stand out. With SK Hynix handing base-die foundry work to TSMC, DRAM makers' control over the HBM manufacturing process has gradually weakened: this technology is no longer something a single vendor can deliver alone, but a new battlefield that demands multi-party collaboration and cross-domain integration. Whether SK Hynix, Samsung, or Micron will end up on top is still unknown. What is certain is that in the post-AI era, the battle over HBM has only just begun, and it will only intensify.
  • July 10, 2025
    NXP's Li Xiaohe: strengthening the China layout with "China-defined" products and industrial synergy
Driven by the digital wave, China has in recent years continuously accelerated the deep integration of industrialization and digitization. The latest 2025 Action Plan for Building Digital China issued by the National Data Administration states that by the end of 2025 the value added of the core digital-economy industries should exceed 10% of GDP and total computing power should exceed 300 EFLOPS. Artificial intelligence, the industrial internet, intelligent connected vehicles, and related fields are set to become the core engines of booming semiconductor demand. This strategic deployment not only delivers policy dividends to the local semiconductor industry but also attracts leading global companies to accelerate their presence in China. Recently, the "NXP 2025 Automotive Leadership Media Open Day" was held in Dalian, where reporters interviewed Li Xiaohe, Executive Vice President of NXP Semiconductors and General Manager of its China Business Unit, for an in-depth discussion of NXP's China strategy, local footprint, and technology trends.

Strengthening investment in the Chinese market is an inevitable choice

With industrialization and digitization advancing steadily, China's technology industry has developed very quickly in recent years, effectively driving rapid growth in the semiconductor field. According to WSTS data, the global semiconductor market is expected to reach US$700.9 billion in 2025, a year-on-year increase of 11.2%, with China accounting for roughly 30%, making it one of the largest semiconductor consumer markets in the world. Customs data also show that from January to May 2025 China imported 231.5 billion integrated circuits, up 8.4%, worth about US$156.8 billion, while exports reached 135.9 billion units, up 19.5%, worth about US$73.26 billion.

This huge market demand and constantly evolving innovation momentum have led more and more global semiconductor companies to attach great importance to China. In his speech at the open day, Li Xiaohe stressed the importance of the Chinese market: it not only accounts for about one-third of NXP's sales, but has in recent years produced many representatives of "new quality productive forces". For example, China now accounts for 70% of global electric-vehicle production and sales, 76% of battery production comes from China, and 56% of the world's largest robotics companies are Chinese. All of this strongly supports the company's long-term development.

Li Xiaohe believes that companies pursue different strategies because of their different "DNA", starting points, and core businesses. In NXP's case, more than 50% of its business comes from automotive, automotive and industrial together account for about 75% of total sales, and the Chinese market contributes about 35% of total sales. This means that automotive and industrial are NXP's two most important end markets, while China is its most important regional market.

In addition, since integrating Freescale in 2016, NXP has gradually developed into a strongly system-oriented company, one of the few in the world that masters microprocessors, sensors, connectivity chips, analog chips, and security chips, and that can organically combine functional safety with system security.
Over the years, NXP has continuously strengthened its system capabilities in the hope of better empowering companies in the automotive, industrial, and other fields. China is not only a major global automotive and industrial market but also NXP's most important regional market, and given NXP's character as a strong systems company, strengthening its investment in the Chinese market is almost inevitable.

Defining and designing products in China

In fact, NXP has been increasing its investment in the Chinese market for years. It has been active in China for 39 years and today has 6,000 employees there, including 1,600 engineers, along with 6 R&D centers, 14 offices, and its largest back-end assembly and test factory in the world.

Even more noteworthy, on January 1 this year NXP established its China Business Unit, an important step in strengthening its local strategy. According to Li Xiaohe, the China Business Unit is not merely a sales organization; it integrates sales, R&D, operations, quality, and technical support. This allows NXP to offer Chinese customers more competitive products, faster innovation cycles, better product optimization, and higher R&D efficiency, putting the concept of "in China, for China" into practice.

On this basis, Li Xiaohe went further with the idea of "in China, for the world", which the new business unit also supports. As Chinese companies innovate in automotive, industrial, and other fields, their competitiveness keeps growing; China's automotive and industrial markets now lead global development and represent the strongest competition in the world. "We believe that success in China can also become success in the world."

Alongside the China Business Unit, NXP has set up product-management and product-definition teams in addition to its technical-support and production teams, and many critical products will now be defined and designed in China. For example, the latest generation of battery-management products that NXP released for the global market was defined in China. When the technical team of the China Business Unit develops a project, it can combine R&D resources from China and overseas: overseas R&D engineers join the same project team and work alongside Chinese engineers, the products are managed and defined in China, and Chinese customers are the first to carry out rapid validation.

Through this approach, NXP has already launched several products, including the latest 18-channel lithium battery cell controller, the BMx7318/7518 series. These products are not only shipping in the Chinese market but are also widely accepted overseas.

New trends provide new growth

As the automotive industry continues its transformation toward intelligence and electrification, semiconductor technology has become the foundation and core driver of this change.
The global automotive semiconductor market is expected to exceed US$65 billion in 2025, and the Chinese market, at a scale of 250 billion yuan, will account for nearly 30% of the global total, with a compound annual growth rate of 11.6%. Behind this growth is the demand created by technology iteration: the penetration rate of new energy vehicles now exceeds 50% and that of L2+ intelligent driving systems exceeds 60%. As a global leader in automotive semiconductors, NXP's moves naturally attract wide attention in the industry.

NXP's gaze, however, is not limited to automotive. Li Xiaohe pointed out: "We believe that in the next few years China will see the parallel and converging development of automobiles, humanoid robots, and the low-altitude economy, because the underlying technologies of these industries, such as functional safety, information security, and manufacturing supply chains, overlap in many ways. Robotics and the low-altitude economy will develop faster than typical early-stage industries because they can be empowered by the electric-vehicle industry."

The automotive and industrial sectors are also complementary. The degree of automation and intelligence in industrial manufacturing keeps rising and AI is applied ever more widely; edge AI has moved into the industrial field, and many of its requirements overlap with automotive, such as low power consumption, real-time behavior, safety verification, functional safety, and reliability.

These technologies will also extend into fields such as healthcare and the smart home. Smart wearables place very high demands on low-power technology, while smart homes require secure interconnection. In the future, more and more devices, from cars and home appliances to mobile phones and wearables, will be interconnected into a new ecosystem. The car will surpass the living room and the office to become the most technology-dense space in which people spend fixed time in a fixed place. Building on new energy, high-performance computing, and related technologies, cars will absorb more scenarios such as health monitoring and human-machine interaction, becoming people's second office, second smart home, and even a place to rest and recover. The underlying technologies for all of this are low power consumption, functional safety, system security, real-time performance, and more, leaving plenty of untapped space for the future.

Meanwhile, Li Xiaohe believes the Chinese market will remain at the forefront of this development. NXP will integrate even more deeply into the Chinese market and join hands with ecosystem partners to empower the industry together.

Frequently Asked Questions

Question: How do you ensure the quality of the domestic chips you distribute?
Answer: We work with chip manufacturers that have strict quality control systems in place. All chips undergo multiple rounds of testing at the manufacturing stage, including electrical performance testing, reliability testing, and environmental testing. Before delivery, we also conduct sampling inspections to ensure that the products meet our quality standards. Additionally, we offer a quality guarantee period during which we will handle any quality-related issues promptly.
Question: What does the warranty policy for your domestic chips cover?
Answer: Our domestic chips come with a standard warranty period. During this time, if the chip fails due to manufacturing defects, we will provide free repair or replacement services. The warranty does not cover damages caused by improper use, unauthorized modifications, or external factors such as electrical surges or physical damage. To initiate a warranty claim, please contact our customer service team and provide detailed information about the problem and the chip's serial number.
Question: What kind of technical support can I get from you after purchasing your chips?
Answer: Our technical support team consists of experienced engineers who are proficient in chip technology. We offer pre-sales technical consultation to help you select the most suitable chips for your applications. After-sales, we provide assistance in chip integration, debugging, and performance optimization. You can reach out to our technical support hotline or email for any technical issues, and we will respond promptly.
Question: How can I be sure that your domestic chips are compatible with the existing systems and components in my project?
Answer: Our domestic chips are designed with broad compatibility in mind. Before you make a purchase, our technical team can offer in-depth consultations. We will analyze your specific system requirements, including interface types, power consumption, and operating frequencies, and then recommend the most suitable chips. Additionally, we have a library of technical documentation and case studies that showcase successful integrations with a wide range of systems and components, which can help you assess compatibility.
Question: How can I ensure a stable supply of your domestic chips, especially during peak demand periods?
Answer: We maintain close partnerships with multiple domestic chip manufacturers. Through long-term cooperation agreements and inventory management strategies, we strive to meet the demand of our customers. We also closely monitor market trends and adjust our procurement plans in advance to ensure a stable supply. In case of unexpected situations, we will promptly communicate with you and provide alternative solutions.

Latest Know-How Articles

Blog Continental Group collaborates with Novosense to create safer automotive pressure sensor chips
On October 24, 2024, the 2024 Continental China Experience Day, hosted by Continental Group, was held in Gaoyou, Jiangsu Province. Nearly 200 guests from the upstream and downstream of the automotive industry chain attended and engaged in in-depth dialogue on the collaborative development and future trends of the automotive industry, jointly exploring future market forms and opportunities. Wang Shengyang, founder, chairman, and CEO of Novosense, and Dr. Zhao Jia, director of the Novosense sensor product line, were invited to attend. During the event, Novosense and Continental Group announced a strategic partnership to jointly develop automotive pressure sensor chips.

In this collaboration, the two parties will focus on developing automotive-grade pressure sensor chips with functional safety features. The new chips will be based on Continental's next-generation global platform, with a focus on improving reliability and accuracy, and can be used to build safer and more reliable systems for automotive airbags, side-collision monitoring, and battery-pack collision monitoring.
Blog Novosense automotive-grade 4/8-channel half-bridge driver NSD360x-Q1
Novosense automotive-grade 4/8-channel half-bridge driver NSD360x-Q1: multi-load compatibility for more flexible automotive domain-control systems

The Novosense NSD3604/8-Q1 series of multi-channel half-bridge gate driver chips offers 4 or 8 half-bridge drivers and can drive at least 4 brushed DC motors, enabling multi-channel high-current motor drive. It can also be used as a multi-channel high-side switch driver, making it well suited to multi-motor or multi-load applications such as window lifts, power seats, door locks, power tailgates, and proportional valves in body-control applications.

  • Wide operating voltage: 4.9V-37V (40V maximum)
  • 4- or 8-channel half-bridge gate drive
  • Configurable timed charge/discharge current drive (CCPD) for optimized EMC performance
  • Integrated two-stage charge pump supporting 100% PWM duty cycle
  • Integrated 2-channel programmable wide-mode op amp
Blog National Technology invited to participate in the 2024 Intel® LOEM Summit
Drawing a blueprint together! National Technology invited to participate in the 2024 Intel® LOEM Summit

From November 5 to 7, 2024, the 2024 Intel® LOEM Summit was held in Bangkok, Thailand, and National Technology Co., Ltd. ("National Technology"), as an Intel global partner, was invited to attend. The summit provided an important platform for 200 Intel business partners from around the world to strengthen communication and connections, share development experience, and explore new opportunities. National Technology took the opportunity to showcase its fourth-generation trusted computing chip NS350, its high-precision gauging battery management chip NB401, and related application cases, demonstrating its product capabilities.

The NS350 is National Technology's fourth-generation trusted computing chip, combining high security, high performance, and strong value. Designed on a 40nm process, it supports I2C and SPI interfaces and is offered in packages such as QFN32 and QFN16. It complies with China's TCM2.0 trusted cryptography module standard (GM/T 0012-2020) and the international TPM2.0 (Spec 1.59) trusted computing standard. The chip has passed CC security functional testing and security assurance evaluation by the third-party laboratory THALES/CNES and has obtained CC EAL4+ certification issued by the French national cybersecurity agency ANSSI. It is compatible with mainstream international operating systems such as Windows, Linux, and BSD UNIX, as well as domestic operating systems such as Galaxy Kirin, Tongxin (UOS), Fangde, and the Shenzhou Wangxin government edition of Windows. It can be used in PCs, server platforms, and embedded systems to protect information system security and resist a wide range of network attacks.

Working in synergy with its anode-material business, National Technology develops electrochemical battery-gauging algorithms, with core strengths in battery safety metering and industry-leading high-precision SOC estimation. It provides complete AFE, MCU, BMS, and algorithm solutions for the consumer, industrial, and automotive electronics markets.

The NB401 is a high-precision gauging battery management chip aimed at the consumer market. It integrates a high-precision gauging algorithm and combines battery monitoring, gauging, protection, and authentication functions, supporting the management and gauging of 2- to 4-cell series lithium-ion or lithium-polymer batteries. The chip integrates two 16-bit high-precision ADCs for voltage (or temperature) and current acquisition, along with hardware protection and wake-up functions. It supports SMBus communication, intelligent charging management, and multiple safety certifications, and its ultra-low power consumption meets the needs of most battery management and gauging applications in consumer electronics. It is suitable for battery packs in laptops, tablets, mobile phones, cameras, drones, power tools, and power banks.
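Battery gauging of the kind described above ultimately rests on sampling pack voltage and current and integrating the current over time (coulomb counting), with vendors layering their own correction algorithms on top. The sketch below is a generic illustration of that basic idea only; it is not the NB401's actual algorithm, and the capacity and sample values are invented for the example.

```python
# Generic coulomb-counting illustration of battery state-of-charge (SOC) gauging.
# NOT the NB401's algorithm; the capacity and samples below are invented for the example.

def update_soc(soc: float, current_a: float, dt_s: float, capacity_ah: float) -> float:
    """Integrate current over dt to update SOC (positive current = discharge)."""
    soc -= (current_a * dt_s / 3600.0) / capacity_ah
    return min(max(soc, 0.0), 1.0)  # clamp to the valid range [0, 1]

if __name__ == "__main__":
    soc, capacity_ah = 0.80, 4.0                     # start at 80% on an assumed 4 Ah pack
    samples = [(1.2, 60), (2.0, 120), (-1.5, 60)]    # (current in A, duration in s); negative = charging
    for current_a, dt_s in samples:
        soc = update_soc(soc, current_a, dt_s, capacity_ah)
    print(f"Estimated SOC: {soc * 100:.1f}%")
```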


Leave a Message
For any request for parts pricing, technical support, or free samples, please fill in the form. Thank you!
