Chinese chips
  • Chip tariff, 300%!
    On the 15th, US President Trump said he will impose semiconductor tariffs within the next two weeks, with rates possibly as high as 200% or even 300%, signaling his readiness to step up efforts to force chip manufacturing back to the United States. According to reports, Trump told reporters aboard Air Force One while flying to Alaska to meet Russian President Putin, "I will impose tariffs on steel and chips next week and the week after." Trump has repeatedly promised that tariffs on chips and pharmaceuticals would arrive within a few weeks, but has not yet announced them. The US Department of Commerce has been investigating the chip and pharmaceutical industries since April, a prerequisite for Trump to impose tariffs on national-security grounds. The process can be quite complex, and such investigations often take months or longer to complete.
    Manufacturers and artificial intelligence (AI) companies have been eager for a clearer semiconductor tariff plan, since chips are used throughout modern consumer products. Last week, at an event with Apple CEO Tim Cook, Trump announced plans to impose a 100% tariff on semiconductors, but said products from manufacturers that relocate production to the United States would not be affected. The White House has not explained how the exemption mechanism would work, but Trump has hinted that Apple, which has committed to a $600 billion Made-in-America program, may receive an exemption. On the 15th, Trump said semiconductor tariffs would initially be set at a lower level, giving companies time to set up factories and build capacity in the United States; after a period of time, the rate would be raised sharply, possibly to 200% or 300%. Sounding unsure, Trump said, "Will the rate I set be 200% or 300%?" He added that he is confident companies will build factories in the United States rather than pay the high tariffs.
    - August 16, 2025
  • Micron announces termination of mobile NAND development
    In response to recent reports of layoffs at its China operations, Micron officially responded to CFM (China Flash Market): given the continued weak financial performance of mobile NAND and its slower growth compared with other NAND opportunities, the company will cease development of future mobile NAND products globally, including terminating development of UFS5. Micron stated that the decision affects only global mobile NAND development; it will continue to develop and support other NAND solutions, such as SSDs and NAND for automotive and other end markets, and will continue to develop and support the mobile DRAM market globally with an industry-leading DRAM portfolio.
    In recent years, as large technology companies have increased investment in artificial intelligence data centers, orders for high bandwidth memory (HBM) chips, prized for their data-processing capability, have surged at semiconductor manufacturers such as Micron. The company now expects quarterly revenue of $11.2 billion, plus or minus $100 million, compared with its previous forecast of $10.7 billion, plus or minus $300 million. Micron also raised its adjusted gross margin forecast for the fourth quarter to 44.5%, plus or minus 0.5 percentage points, from the previous 42%, plus or minus 1 percentage point. The company said the revised forecast reflects improved pricing, particularly for dynamic random access memory (DRAM) products. Micron Chief Business Officer Sumit Sadana said at an industry conference on Monday, "Across all the different end markets we serve worldwide, price trends have been strong. We have been very successful in raising prices." EMarketer analyst Jacob Bourne said that supply constraints in HBM production and strong AI demand have allowed Micron to charge higher prices, marking a shift from memory makers' historically thin profit margins. In June of this year, Micron announced it would increase its US investment by $30 billion, bringing the total to $200 billion.
    - August 13, 2025
  • Trump: Lip-Bu Tan must resign immediately! Intel responds!
    US President Trump has said that Intel's CEO has a serious conflict of interest and must resign immediately, declaring there is no other solution to the problem. The statement cast a heavy shadow over the future direction of Intel, which is in the middle of an overhaul, and sent Intel's stock down nearly 5% in pre-market trading. Ray Wang, chief semiconductor analyst at Futurum Group, said, "This is a crisis point for how Intel will handle its relationship with the US government going forward," and warned that Trump's interference could make the situation worse. Phil Blancato, CEO of financial firm Ladenburg Thalmann Asset Management, worried the episode could set a bad precedent: "You don't want the President of the United States deciding who runs a company."
    Lip-Bu Tan (Chen Liwu) was born in Malaysia in 1959, grew up and was educated in Singapore, earned a bachelor's degree in physics from Nanyang University, and went on to graduate studies in nuclear engineering at the Massachusetts Institute of Technology. He later founded the venture capital firm Walden International. In 2017, the data analytics company Relationship Science named him one of the best-connected executives in the technology industry, giving him a perfect "power rating" of 100. The 65-year-old Tan took over as Intel CEO from his predecessor Pat Gelsinger in March this year, the first CEO of Chinese descent in Intel's history. After taking office, he proposed a series of reforms intended to turn Intel back into a customer-centric, engineering-centric company. To that end, Tan cut roughly 50% of the management layers to flatten the organization, divested non-core businesses, and began layoffs to concentrate resources on the core business, including plans to reduce the workforce to about 75,000 by year-end, a cut of roughly 22%. Intel has also pledged a more cautious approach to manufacturing investment.
    Intel once dominated chip manufacturing, but in recent years it has fallen behind its rival TSMC of Taiwan, China in process technology, and it has almost no share of the AI chip market now dominated by Nvidia. When Tan took over, the industry broadly believed he had the ability to pull Intel out of its current crisis. He has been CEO for less than five months, and Intel's reforms are still actively under way; forcing him out now would clearly be highly unfavorable to Intel's future development, and the controversy adds new uncertainty to Intel's ongoing strategic transformation. In response, Intel issued a statement rejecting Trump's call for Tan to resign and promised "significant investments" aligned with the president's "America First" agenda. "Intel, the Board of Directors, and Lip-Bu Tan are firmly committed to advancing the national and economic security interests of the United States and are making significant investments in line with the President's America First agenda," Intel said in a statement released on Thursday. "Intel has been manufacturing in the United States for 56 years. We will continue to invest billions of dollars in US semiconductor research and manufacturing, including our new fabs in Arizona, which will run the most advanced process technology in the United States; we are the only company investing in leading-edge logic node development in the United States. We look forward to continuing to work with the administration."
    Commenting on Trump's demand, Stacy Rasgon, a senior analyst at Bernstein Research, said, "Tan's activities in China are no secret. He is a legendary figure in the semiconductor industry. Intel was in a difficult position when he accepted the job, which is part of why he accepted it, and any turnaround was always going to be a difficult process." Rasgon believes Trump's displeasure may stem from Tan's failure to build a personal relationship with him, as well as possible dissatisfaction with Intel's cuts to capital spending and its wavering on leading-edge manufacturing. Intel is a major beneficiary of CHIPS and Science Act funding, having received nearly $8 billion in grants, and the Trump administration has been trying to use those grants to extract commitments to further investment. In addition, Intel said this year it would delay production at its Ohio chip plant until at least the 2030s, which may conflict with the government's goal of expanding US semiconductor capacity. Trump's close ally, Ohio Republican Senator Bernie Moreno, has also called for Tan to step down. Tan is not the first corporate leader to come under fire from Trump. One important reason for his hiring was his deep industry expertise and network; if he resigns, it is unclear who could replace him.
    - August 08, 2025
  • 2nm technology, transferred to the United States!
    TSMC has ambitious plans in the United States, the most important of which is to bring its cutting-edge N2 process there to help the country take a leading position in the chip industry. TSMC's most advanced chip technology is coming to the United States, and the momentum looks unstoppable. Since the Trump administration took office, TSMC has shown strong interest in the region, partly because of the "tariff threat" facing the chip giant. According to reports, TSMC is preparing a 2-nanometer production line at the third fab (P3) at its Arizona site. The plant is under construction and could enter production as early as 2026, nearly a year later than the corresponding line in Taiwan, China.
    Building chip fabs in the United States has clearly become a trend, largely because the US government is making such commitments part of its tariff agreements with countries such as South Korea. For TSMC, the start was genuinely difficult, with plenty of cultural and logistical problems, but with increased investment behind it, the company now appears more determined about its ambitions in the US market. TSMC is pushing aggressively into the US chip supply chain, signaling its intent to dominate this market. Because large technology companies need to stay close to the current US administration, companies such as NVIDIA, Apple, and AMD are preparing to shift their supply chains to the United States through hundreds of billions of dollars of investment, from building production facilities to helping supply-chain partners make the transition. Since all of these companies currently depend on TSMC's chips, it is crucial for the Taiwan, China giant to establish a strong chip network in the United States. By transferring the 2-nanometer process to the United States, TSMC has effectively chosen its next center of gravity. US domestic demand for chips is expected to rise significantly within a few years, an outcome driven largely by the current administration's push for self-reliance. How TSMC's relationship with the United States develops is worth watching, but for now, everything hinges on the United States.
    - August 03, 2025
  • Global chip foundry grows by 17%!
    On July 28, market research firm Counterpoint Research reported that revenue for the global pure-play wafer foundry industry is expected to reach $165 billion in 2025, a year-on-year increase of 17%, with a compound annual growth rate of 12% over 2021-2025. The growth is driven mainly by advanced process nodes: revenue from 3nm nodes is expected to grow more than 600% year-on-year to $30 billion, while 5/4nm nodes will exceed $40 billion. Together, these advanced nodes will contribute more than half of total pure-play foundry revenue in 2025. The report points out that rising demand for high-end smartphones, AI PC solutions, AI ASICs, GPUs, and high-performance computing (HPC) is the main driver of advanced-node revenue growth. In terms of competition, TSMC leads in advanced nodes, followed by Samsung and Intel, while UMC, GlobalFoundries, and SMIC still see solid demand at other nodes, even if their revenue growth cannot keep pace with the leading edge. In addition, back-end packaging continues to innovate and generate revenue: technologies such as HBM memory integration and the migration to chip-level packaging are opening new growth opportunities, improving product performance and reliability while creating new revenue streams for foundries.
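    As a quick back-of-the-envelope check on the growth figures above (my own arithmetic, not from the Counterpoint report; the 2021 base value is implied rather than stated), the 12% CAGR works out as follows:

```latex
% CAGR over 2021-2025 (four annual steps), with R_t the pure-play foundry revenue in year t
\mathrm{CAGR} = \left(\frac{R_{2025}}{R_{2021}}\right)^{1/4} - 1
% Given R_{2025} = \$165\,\mathrm{B} and a 12\% CAGR, the implied 2021 base is
R_{2021} = \frac{165}{(1.12)^{4}} \approx \frac{165}{1.574} \approx \$105\,\mathrm{B}
```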
    - July 29, 2025
  • SK Hynix achieves record high performance! Operating profit increased by 68%
    According to SK Hynix's latest financial report, second-quarter operating profit was KRW 9.21 trillion, up 68.5% year-on-year and above analysts' expectations of KRW 8.93 trillion; revenue was KRW 22.23 trillion, up 35.4% year-on-year and above expectations of KRW 20.56 trillion. On a sequential basis, second-quarter revenue rose 26% and operating profit rose 24%. SK Hynix said, "As major global technology companies invest aggressively in artificial intelligence (AI), demand for AI-oriented memory continues to grow. The company's DRAM and NAND flash shipments both exceeded expectations, producing the best results in its history." The company added that it expanded sales of 12-high HBM3E in its DRAM business and increased NAND flash sales across various applications, sustaining its strong performance on the back of industry-leading competitiveness in AI memory and profit-oriented operations.
    Thanks to these results, cash and cash equivalents reached KRW 17 trillion at the end of the second quarter, up KRW 2.7 trillion from the previous quarter. The debt ratio and net-debt ratio stood at 25% and 6%, respectively, with net debt down a substantial KRW 4.1 trillion from the prior period. While customers increased their memory orders in the second quarter, they also raised finished-goods output and kept inventories at stable levels. SK Hynix expects customers to launch new products in the second half of the year and memory demand to keep growing. Building on the proven performance and mass-production capability of its HBM3E products, SK Hynix aims to double its HBM sales year-on-year and deliver stable results; for HBM4, it plans to prepare timely supply in line with customer requirements and to keep strengthening its industry-leading competitiveness.
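    For context, the base figures implied by the growth rates quoted above work out roughly as follows (my own arithmetic from the stated percentages, rounded; not figures taken from the company's report):

```latex
% Implied year-ago (Q2 2024) figures from the stated year-on-year growth rates
\mathrm{OP}_{\mathrm{Q2\,2024}} \approx \frac{9.21}{1.685} \approx 5.5\ \text{trillion KRW},\qquad
\mathrm{Rev}_{\mathrm{Q2\,2024}} \approx \frac{22.23}{1.354} \approx 16.4\ \text{trillion KRW}
% Implied prior-quarter (Q1 2025) figures from the stated sequential growth rates
\mathrm{OP}_{\mathrm{Q1\,2025}} \approx \frac{9.21}{1.24} \approx 7.4\ \text{trillion KRW},\qquad
\mathrm{Rev}_{\mathrm{Q1\,2025}} \approx \frac{22.23}{1.26} \approx 17.6\ \text{trillion KRW}
```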
    - July 24, 2025
  • Top 50 Semiconductor Suppliers by 2025
    The McClean Report's ranking of the top 50 semiconductor suppliers for 2024 is shown in Figure 1. The ranking is based on calendar-year sales and covers integrated circuits (ICs) and O-S-D (optoelectronics, sensors/actuators, and discrete) devices; where a company's fiscal year does not match the calendar year, sales have been adjusted to the January-December period. The list includes 19 companies headquartered in the United States, 11 in Japan, 7 in Taiwan, China, 6 in Europe, 4 in the Chinese mainland, and 3 in South Korea.
    Figure 1: Top 50 semiconductor sales leaders in 2024 (including foundries, in millions of US dollars)
    In 2024, total semiconductor sales of the top 50 suppliers grew 26%, four percentage points higher than the 22% growth of the global semiconductor market. Winbond, Powerchip, and Vanguard returned to the top 50, and CXMT, a DRAM supplier from the Chinese mainland, made an impressive debut at 34th place. Overall, 44 of the top 50 companies changed position. The biggest gain came from fabless IC supplier Realtek, which jumped 10 places to 29th on strong sales of connectivity, multimedia, and networking ICs, while Monolithic Power Systems rose 5 places to 37th on surging sales of power management ICs for AI and server applications. The steepest decline in 2024 was Sharp, which dropped 10 places to 43rd; Microchip slipped from 19th in 2023 to 25th in 2024.
    The McClean Report ranking covers integrated device manufacturers (IDMs), fabless companies, and foundries, but excludes semiconductor sales from system vendors such as Apple, Amazon, Google, and Meta, which design chips for their own systems and do not sell ICs on the open market. The top 50 ranking in Figure 1 also includes eight pure-play foundries. The McClean Report includes foundries because it has always treated this ranking as a "top supplier list" rather than a market-share ranking, and it explicitly notes that some semiconductor sales may be double-counted.
    Excluding the eight pure-play foundries, the following companies move into the top 50 (Figure 2):
    - Diodes Incorporated (US/Taiwan), $1.31 billion
    - Nanya (Taiwan, China), $1.06 billion
    - Socionext (Japan), $1.03 billion
    - GigaDevice (Chinese mainland), $1.02 billion
    - Nuvoton (Taiwan, China), $981 million
    - Semtech (US), $890 million
    - Sanken Electric (Japan), $832 million
    - Macronix (Taiwan, China), $806 million
    Figure 2: Top 50 semiconductor sales leaders in 2024 (excluding foundries, in millions of US dollars)
    As shown in Figure 3, US-headquartered companies accounted for 57% of top 50 supplier sales in 2024 excluding foundries, with South Korean companies second at 21%. The Korean share is particularly impressive because it comes from just three companies: Samsung, SK hynix, and LX Semicon. Notably, excluding pure-play foundries, companies in Taiwan, China hold a 5% share; if foundry sales are included, Taiwan, China's share of the top 50 jumps to 17%.
    Figure 3: Top 50 semiconductor suppliers by headquarters location in 2024 (excluding foundries)
    South Korea's share is especially prone to year-to-year swings because of cyclicality in the memory market. In 2024, South Korea took 21% of the market, up 5 percentage points from 16% in 2023, thanks to 88% growth in DRAM sales and 69% growth in the NAND flash market.
    Supplier share of the semiconductor market in 2024: semiconductor sales are dominated by a handful of large companies, and this was more evident in 2024 than ever. Excluding foundries, the top 10 companies accounted for two-thirds of 2024 semiconductor sales, up nearly 20 percentage points from the top 10's share in 2010 (Figure 4). The 50 largest suppliers set a record, accounting for 92% of global semiconductor sales.
    Figure 4: Supplier share of the semiconductor market in 2024 (excluding foundries, totaling $676.6 billion)
    Top 50 companies ranked by growth rate: growth rates among the top 50 suppliers span 260 percentage points, from CXMT's 214% increase to Sharp's 46% decline (Figure 5). Of the top 50, 23 companies grew sales, 17 of them by double digits, while 27 companies saw sales fall in 2024, from 2% declines at Intel and Powerchip to the 46% drop at Sharp.
    Figure 5: Leading semiconductor sales companies ranked by 2024 growth rate
    Leading semiconductor sales companies in the first quarter of 2025 and the outlook for the second quarter: the McClean Report's ranking of leading semiconductor companies for the first quarter of 2025 is shown in Figure 6. Where necessary, sales have been restated to the calendar first quarter (January-March).
    Figure 6: Leading semiconductor sales companies in the first quarter of 2025 and outlook for the second quarter
    As expected, NVIDIA easily held its position as the world's largest semiconductor supplier in the first quarter of 2025, with sales up 23% quarter-on-quarter and 89% year-on-year. Although the company and its products have become targets of US tariff and export policy toward China, demand for its data-center AI processors remains strong. CEO Jensen Huang has said bluntly that US restrictions on Nvidia processor sales have failed to achieve their goals and have done more harm than good to the US semiconductor industry; he estimates that US export controls have cost the company at least $15 billion in sales, revenue that would otherwise have helped fund development of next-generation AI processors. Even if this year's rapid sales growth slows, Nvidia is likely to remain at the top of the list for the rest of 2025.
    In the first quarter of 2025, only Nvidia and 10th-ranked MediaTek among the top 10 suppliers posted quarter-on-quarter revenue growth. MediaTek said the increase was driven mainly by stronger market demand and growing customer adoption of AI, 5G, and Wi-Fi 7 technologies. Broadcom, ranked 8th, had flat sales in the first quarter, while every other top 10 company saw sales decline, from AMD's 1% dip to Samsung's 20% drop. Samsung and SK Hynix sales fell on seasonally weak demand and easing price pressure for DRAM and NAND flash devices. Similarly, Intel's 11% revenue decline was partly seasonal, but it also shows Intel still has much work to do to rebuild consumer and original equipment manufacturer (OEM) confidence in its current and future product lineup and roadmap. Intel is counting on its leading-edge 18A process to give profitability a much-needed boost in the second half of 2025, while continuing to court external customers for its foundry business.
    Overall, the top 10 ranking was unchanged from the previous quarter, but there was some reshuffling further down the list. Despite a 9% sales decline, NXP still rose two places to 13th. STMicroelectronics (ST) fell from 13th to 16th after a 24% revenue decline, citing a delayed recovery in industrial applications, inventory adjustments, and slowing automotive IC sales (especially in Europe). Kioxia's sales fell 35%, dropping it three places to 19th in the first-quarter ranking. Among the top 25, eight companies grew sales quarter-on-quarter in the first quarter of 2025, 14 saw declines, and Broadcom's sales were flat.
    After strong quarterly revenue at the end of 2024, most memory vendors on the list saw broad sales declines in the first quarter: Samsung, SK Hynix, SanDisk, and Kioxia all posted double-digit revenue declines in the first quarter of 2025, while Micron's sales slipped 2%. After a difficult 2024, some microcontroller (MCU) and analog suppliers can claim the darkest days are behind them: Texas Instruments (TI), Analog Devices, and Renesas all posted quarter-on-quarter sales growth in the first quarter of 2025, and TI and Analog Devices expect continued single-digit growth in the second quarter.
    Nineteen of the top 25 suppliers issued revenue guidance for the second quarter of 2025. Nvidia (9%), Micron (9%), STMicroelectronics (8%), TSMC (7%), and Texas Instruments (7%) expect notable sequential rebounds, while the average guidance of the top 25 points to a 3% quarter-on-quarter increase in second-quarter sales. Companies providing guidance have been cautious, acknowledging market uncertainty, the potential impact of tariffs, ongoing regional conflicts, high interest rates, and other geopolitical tensions that could change forecasts abruptly. With first-quarter sales down 2% sequentially and average second-quarter guidance of 3% growth, the data support TechInsights' forecast that annual semiconductor market growth will slow due to the "mild impact" of tariffs and trade restrictions.
    - July 22, 2025
  • HBM: Will it collapse?
    In most people's eyes, the global frenzy over AI sovereignty has driven competition for GPUs, which in turn has driven demand for HBM. In a report released in March this year, the analysis firm Yole said that since ChatGPT emerged at the end of 2022, generative AI has boomed, driving an unprecedented 187% year-on-year increase in HBM bit shipments in 2023 and a further 193% surge in 2024. "This growth momentum is expected to continue. HBM is growing far faster than the overall DRAM market. Global HBM revenue is expected to grow from $17 billion in 2024 to $98 billion in 2030, a compound annual growth rate of 33%," Yole continued. The latest revenue figures from the three major memory makers suggest HBM is indeed on the trajectory Yole predicted. SK Hynix, the new DRAM leader, expects operating profit of nearly KRW 9 trillion ($6.6 billion) for the second quarter on surging HBM demand, with HBM expected to account for more than 50% of its total DRAM revenue this year, up from over 40% in the fourth quarter of 2024. Micron, another HBM supplier, also posted record results on the back of HBM. Recently, however, analysts have begun to sound warnings about HBM.
    Goldman Sachs: HBM prices face a significant drop
    According to a Goldman Sachs report cited by Taiwanese media, intensified competition and oversupply could lead to the first decline in HBM prices in 2026, challenging market leader SK Hynix. Goldman Sachs expects HBM prices could fall by double digits in 2026 and warns that greater pricing pressure, intensified competition, and a shift of pricing power toward major customers (to which SK Hynix has significant exposure) could squeeze the company's margins. In Goldman's view, the downward pressure stems from a sharp increase in HBM supply from the major manufacturers, which is expected to outpace demand and push down the annual average selling price (ASP). After years of tight supply, Goldman says, the HBM market is likely to soften in 2026, increasing pricing pressure across the industry. Goldman also notes that NVIDIA's next-generation Rubin GPU will not increase HBM capacity over the B300: both adopt 288GB, with Rubin using 12-high HBM4 and the B300 using 12-high HBM3E. That means GPU-driven growth in HBM demand is limited, which is unwelcome news for NVIDIA's main HBM supplier, SK Hynix. Goldman now expects HBM growth to slow markedly, to 25% year-on-year from a previous estimate of 45%, and has revised its HBM total addressable market (TAM) forecast, nudging its 2025 estimate up 1% to $36 billion but cutting its 2026 estimate by 13% to $45 billion (from $51 billion). Korean analysts warn that SK Hynix's market share could shrink as the next-generation HBM4 launches in 2025. The report also notes that although the lifting of US restrictions on exporting NVIDIA's H20 chips to China should boost HBM demand and thus help SK Hynix, it could also give a lift to its competitors. The report further cites analysts warning that Samsung's HBM shipments could grow at an annual rate of 20% through 2026, which would put direct pressure on SK Hynix's margins, and that Chinese companies will also enter the market as new players, adding uncertainty. TrendForce, however, believes that even with HBM capacity continuing to grow and suppliers' yields steadily improving, price cuts for mature products are unlikely; next year's focus will be HBM4, which is still in qualification, so it is too early to call a winner. Factoring in next-generation HBM launches, TrendForce expects the overall average price of HBM to keep rising.
    UBS: HBM heads for a breakout year
    While Goldman Sachs is bearish, UBS is highly optimistic about HBM in its latest report. UBS analysts say that as AI compute demand continues to reshape the memory landscape, 2026 is shaping up to be a breakout year for high bandwidth memory. "Our channel research continues to indicate that SK Hynix is likely to hold a stable share of the HBM market in 2026, at roughly 50% of total capacity," the analysts note, underscoring their expectation that Hynix will keep control of next-generation memory even as contract negotiations and competitors' ambitions escalate. Despite some near-term noise in the memory market, notably negotiations between NVIDIA and HBM suppliers SK Hynix, Samsung, and Micron Technology, and the possibility of a "price correction" as Samsung nears HBM3E qualification, UBS believes the real story is still building. UBS reiterates that Hynix's lead at Nvidia will hold, and that recent design wins with Google, AWS, and Microsoft's ASIC programs suggest Hynix will be locked in as the primary or sole HBM supplier across the industry. Even as competition intensifies, it is unlikely to become a serious obstacle for Hynix before the end of 2026. On pricing, with more suppliers entering, HBM3E still leaves some room for negotiation, but Hynix expects only a "slight to moderate decline" in 2026 prices versus 2025. More importantly, with the HBM4 premium earned through its first-mover advantage, Hynix expects HBM4 pricing to be about 40% higher than the prior generation, even after a roughly 50% increase in cost per bit. Overall, UBS forecasts blended HBM price per bit to rise 18.5% year-on-year in 2026, driving HBM revenue to an expected $32.7 billion and accounting for more than 70% of SK Hynix's operating profit. The HBM story is not without risks, however: delays in Samsung's capacity expansion could intensify competitive pressure this year, and the sharp rise in HBM4 production costs could lead to tougher price negotiations. Investors are also watching capital expenditure closely, since Hynix's expansion plans depend on the progress of Nvidia's next-generation Blackwell Ultra and Rubin product cycles. Despite the recent uncertainty, UBS reiterates that the 2026 outlook for HBM remains strong and expects Hynix to keep its dominant position.
    Where does HBM really stand?
    The reports above show that the bull-bear battle over HBM is fiercer than ever, so it is worth looking at the capacity and technology plans of the three HBM giants as a reference for where HBM is heading. According to analysts, Samsung Electronics and SK Hynix are each expected to secure monthly HBM capacity of roughly 150,000 wafers by the end of 2025:
    1. Samsung Electronics: initially expected to reach 170,000 wafers per month by the end of 2025, since cut to 150,000; its shipment forecast has accordingly been lowered from 80 billion Gb to 60 billion Gb.
    2. SK Hynix: initially expected to reach 65,000 wafers per month by the end of 2025, since raised to 150,000, with additional expansion planned at M15X in 2026.
    3. Micron: expects to expand capacity to 25,000 wafers per month by the end of 2024, 65,000 by the end of 2025, and 90,000 by the end of 2026.
    The report further points out that the market expansion of HBM3E and HBM4 is the biggest variable. After 2025, demand for high-end HBM3E and above is expected to rise, driven by Blackwell (NVDA) and TPUs (AVGO). According to the report, 2025 demand is 5.3 million units for the Blackwell series and 2.2 million units for TPU v6. The main driver of the increase is capacity growth per device: 1) DRAM capacity in Blackwell has risen by a factor of 2 (H200 to B300) to 2.4 (H100 to B200), and 2) DRAM capacity in TPU v6 has doubled compared with v5p. With the arrival of HBM4, ASIC customization will further stimulate HBM demand. According to industry insiders, Samsung Electronics, SK Hynix, and Micron are all expanding HBM supply to ASIC design companies. Last month, Micron said at its earnings conference that, in addition to Nvidia and AMD, ASIC platform companies are among the four major customers for its volume HBM shipments. "This reflects confidence driven by growth in ASIC customer demand," said Gao Yongmin, a researcher at DAOL Investment & Securities. With surging demand for custom AI semiconductors operated by companies such as Amazon, Meta, and Google, the ASIC market has grown rapidly, because general-purpose AI chips from Nvidia and AMD are expensive and their performance-per-watt is not ideal for running these companies' AI models. The industry expects ASIC shipments to exceed Nvidia's AI chip supply next year, and JPMorgan predicts the global AI ASIC market will reach roughly $30 billion (about KRW 41 trillion) this year, growing more than 30% annually. As ASIC companies grow quickly, the memory makers producing HBM are expanding supply accordingly: HBM market leader SK Hynix is reportedly shipping HBM in volume for ASIC chips from Amazon, Google, and Broadcom, and Samsung Electronics is reportedly supplying fifth-generation HBM (HBM3E) to customers such as Broadcom. An industry insider noted that ASIC-bound supply still accounts for only about 10% of the overall HBM market, but supply that used to be concentrated on Nvidia and AMD is diversifying rapidly. LS Securities researcher Cui Yonghao said, "Starting next year, as ASICs' market share keeps growing, HBM's customer base will become increasingly diversified."
    So, in your view, where is HBM headed?
    - July 19, 2025
  • 18A + N2! Intel 2nm Chip Tape Out!
    Intel is using both its own 18A process and TSMC's N2 to hedge against 18A delivery and capacity risks. On July 14, it was reported that Intel completed the tape-out of the Nova Lake processor's compute tile on TSMC's 2nm-class N2 node a few weeks ago; the next step is to power the silicon on and bring it up. If everything goes smoothly, this lays the groundwork for mass production in 2026. CPU cores have the most urgent need for advanced processes, and a 2nm node can deliver roughly a 15% performance improvement and 30% better power efficiency, so this tape-out is most likely the compute tile, which can fundamentally lift overall processor performance. Intel plans to continue its dual-source foundry strategy, placing more than $14 billion in 2nm orders with TSMC covering capacity for 2024-2025. Compute tiles may use both Intel's own 18A and TSMC's N2 to address any 18A delivery or capacity problems. Whether 18A-P/14A can keep to the schedule is a concern, and it is uncertain whether they will be ready for mass production ahead of the 2025 production milestone. TSMC's N2 is slated to enter production in the middle to latter part of 2025, but the timeline is tight. Nova Lake-S is expected to ship in the second half of 2026, most likely launching in the third quarter of 2026. It integrates up to 52 cores, paired with an 8800 MT/s memory controller and the associated graphics and media processing tiles, making it difficult to manufacture. The chip is currently undergoing power-on testing, volume production and shipment will take time, and its market performance remains to be seen.
    - July 16, 2025
  • MCU, a big change
    Introduction
    In 2025, within just half a year, leading MCU manufacturers such as ST, NXP, and Renesas have almost simultaneously released automotive MCUs equipped with new embedded memories (such as PCM and MRAM), breaking the long-standing pattern of MCUs dominated by embedded Flash. It is still too early to talk about these becoming "standard equipment," but it is clear that new memories have leapt from experiment to strategic roadmap and have begun to reshape the MCU ecosystem.
    The MCU used to be a "small and simple" device for basic control logic, but in recent years it has been evolving toward "small and powerful": processes have moved from traditional 40nm to 22nm, 16nm, and even more advanced nodes; AI acceleration, security units, and wireless modules are being integrated; and MCUs have become candidates for the "car brain" and the edge computing hub. Behind this, a long-overlooked but crucial technology is catching up: embedded non-volatile memory (eNVM). Under the trend of software-defined vehicles, OEMs and Tier 1 suppliers face unprecedented challenges: ECU complexity is surging and functions are being consolidated; OTA updates, AI inference, and model loading make software ever "thicker"; and memory capacity and read/write performance have become bottlenecks in the vehicle architecture. Traditional embedded Flash can no longer keep up in density, speed, power consumption, or endurance. Against this backdrop, new memories (PCM, MRAM) have become key weapons in the evolution of the MCU.
    ST chooses phase change memory (PCM)
    Phase change memory (PCM) is an emerging non-volatile memory technology whose basic principle is to store information through a material's phase change (from amorphous to crystalline). The underlying mechanism was invented by Stanford R. Ovshinsky in the 1960s; STMicroelectronics holds a patent license to that original work and is the first manufacturer to put PCM into automotive-grade MCUs. ST describes the working principle on its website: PCM uses a germanium-antimony-tellurium (GST) alloy and exploits the material's ability to switch rapidly, under controlled heating, between amorphous and crystalline states. These states correspond to logic 0 and logic 1 and can be distinguished electrically, the amorphous state having high resistance (logic 0) and the crystalline state low resistance (logic 1). PCM supports read and write at low voltage and offers several substantial advantages over Flash and other embedded memory technologies.
    Figure: Working principle of PCM (Source: ST)
    After years of development, ST launched Stellar with xMemory in April 2025, a new generation of scalable memory embedded in its Stellar series of automotive microcontrollers. The core of Stellar xMemory is ST's proprietary PCM technology, which ST claims has the industry's smallest qualified memory bit cell and can transform the challenging process of developing software-defined vehicles (SDVs) and evolving electrification platforms. ST's Stellar P and G series automotive MCUs will reportedly be equipped with the latest generation of PCM technology via xMemory.
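    To make the resistance-based encoding described above concrete before returning to the Stellar product line, here is a minimal illustrative model in Python; the resistance and threshold values are arbitrary placeholders, not ST specifications:

```python
# Toy model of reading a PCM bit cell: the stored value is inferred purely from
# the cell's resistance, as described above (amorphous = high resistance = logic 0,
# crystalline = low resistance = logic 1). All values are illustrative placeholders.

AMORPHOUS_OHMS = 1_000_000     # hypothetical high-resistance (RESET / amorphous) state
CRYSTALLINE_OHMS = 10_000      # hypothetical low-resistance (SET / crystalline) state
READ_THRESHOLD_OHMS = 100_000  # hypothetical decision threshold between the two states


def read_pcm_bit(measured_resistance_ohms: float) -> int:
    """Return the logic value implied by a resistance measurement of one PCM cell."""
    return 1 if measured_resistance_ohms < READ_THRESHOLD_OHMS else 0


if __name__ == "__main__":
    for r in (AMORPHOUS_OHMS, CRYSTALLINE_OHMS, 250_000, 40_000):
        print(f"{r:>9} ohm -> logic {read_pcm_bit(r)}")
```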
    The Stellar P and Stellar G series are Stellar Integration MCUs aimed at centralized zonal controllers, domain controllers, and body applications. The first to launch will be the Stellar P6 MCU, designed for new powertrain trends and architectures in electric vehicles (EVs), with production planned for the second half of 2025. With Stellar and xMemory, customers no longer need to manage multiple devices with different memory options or bear the associated development and qualification costs; a single device with scalable memory provides an efficient and economical solution. This simplified approach lets automakers design for the future from the outset and leave more room for innovation later in the development cycle, reducing development cost and accelerating time to market through a leaner supply chain.
    Figure: Cross-section of an embedded PCM bit cell in FD-SOI technology, showing the heater element that rapidly switches the memory cell between crystalline and amorphous states.
    ST points out that choosing the right MCU early in the SDV life cycle ensures sufficient on-chip memory for future software development. Today, over-specifying memory raises costs, while under-specifying it may force a later search for, and re-qualification of, another MCU with more memory, adding complexity, cost, and delay. Stellar MCUs with xMemory are competitively priced and deliver further savings by simplifying the OEM supply chain and shortening qualification time, extending product lifecycles and maximizing reuse across projects to accelerate time to market.
    NXP and Renesas embrace MRAM
    Magnetoresistive RAM (MRAM) is another non-volatile memory "black technology." MRAM stores data using the physical properties of magnetic materials, offering very high write speeds, low power consumption, and extremely high endurance, and it has been adopted by companies such as NXP and Renesas. NXP was among the first automotive MCU makers to launch an MRAM-based MCU: in March this year, NXP Semiconductors announced its S32K5 series of automotive MCUs, the industry's first MCU built on a 16nm FinFET process with built-in MRAM, an important milestone. The S32K5 series is designed to extend the NXP CoreRide platform, providing pre-integrated zonal and electrification system solutions to support the evolution of scalable software-defined vehicle (SDV) architectures. Automakers are increasingly adopting zonal architectures, each with its own approach to integrating and distributing the functions of electronic control units (ECUs). At the heart of these solutions is an advanced MCU architecture combining real-time performance with low-latency, deterministic communications and innovative isolation capabilities. The addition of high-performance MRAM significantly speeds up ECU programming, both in the factory and during over-the-air (OTA) updates: MRAM writes more than 15 times faster than traditional embedded flash, giving automakers more flexibility to deploy new software features throughout the vehicle's life. In July 2025, Renesas also released an MCU with built-in MRAM, though on a 22nm process rather than NXP's 16nm; the device carries 1MB of MRAM and 2MB of SRAM.
    The use of MRAM is said to be a major feature of the second-generation RA8 series. Besides high endurance and data retention, MRAM offers high-speed read and write, no need for erase-before-write, and low power consumption. Renesas presented its high-speed MRAM read/write technology for high-performance microcontrollers at the International Solid-State Circuits Conference (ISSCC) 2024, and the RA8P1 uses this technology. For applications needing more memory, the device provides an octal SPI interface and a 32-bit external bus interface supporting XIP/DOTF, and system-in-package (SiP) versions with 4MB or 8MB of integrated external flash are also available. On the peripheral side, it supports parallel camera input, MIPI CSI-2, serial audio input, and multimodal AI voice input via PDM, along with a 16-bit A/D converter, graphics HMI functions, and a range of serial interfaces.
    TSMC: MRAM and RRAM in parallel
    As the world's leading foundry, TSMC is betting on two new memory technologies: MRAM and RRAM. At its 2025 Technology Symposium, Dr. Y.J. Mii, TSMC Executive Vice President and Co-Chief Operating Officer, noted that eFlash has hit scaling bottlenecks at the 28nm node and that a new generation of NVM (non-volatile memory) must take over its role at more advanced processes. TSMC has therefore laid out plans to bring RRAM and MRAM to 22nm, 16nm, and 12nm, and to push further to 6nm and 5nm nodes.
    TSMC is one of the few manufacturers to have achieved large-scale mass production of RRAM: it is in volume production at 40nm, 28nm, and 22nm with automotive-grade qualification, 12nm RRAM has entered customer tape-out, and a 6nm version is in progress. Infineon's new-generation AURIX MCUs use TSMC's eRRAM, making it an important embedded memory solution for Infineon's automotive platform. RRAM's advantages are low process complexity, direct integration in the back-end-of-line (BEOL) metal layers, full compatibility with logic processes, adaptability to many MCU architectures, and suitability for power-sensitive, cost-sensitive consumer and automotive applications. MRAM, by contrast, has a more complex process but superior characteristics: write speeds more than ten times those of Flash, non-volatility combined with very high endurance, and suitability for workloads with heavy high-speed writes, frequent OTA updates, and AI inference. For in-vehicle computing platforms (such as ADAS and AI SoCs) that prioritize compute density, data throughput, and real-time behavior, MRAM may be the most attractive successor to eFlash. TSMC currently has MRAM in mass production at 22nm, 16nm MRAM is in customer preparation, 12nm is under development, and a more aggressive roadmap extends to 5nm. In May 2025, TSMC announced it would establish its first European Design Center (EUDC) in Munich, Germany, focused on R&D and customer support for MRAM technology in automotive applications; it will be TSMC's tenth design center worldwide and is scheduled to open officially in the third quarter of 2025.
    Its service areas include automotive, industrial, AI, telecommunications, and the Internet of Things. This also means TSMC is not only pushing new memories across its process platforms but also deepening its automotive development ecosystem globally. Beyond advancing process nodes, TSMC is pursuing breakthroughs in several directions: 3D RRAM MCUs, stacking embedded memory in the package to free up on-chip area; SOT-MRAM (spin-orbit torque), which offers lower power and faster writes than conventional STT-MRAM and is expected to reach volume production; and a silicon photonics platform combining optical interconnect with memory interfaces, targeting data centers and edge computing. These technologies will further consolidate TSMC's lead in specialty processes and the embedded memory ecosystem.
    The trend toward storage-compute integration
    Whether PCM, MRAM, or RRAM, these are not merely replacements for existing memory; they are catalysts for changes in MCU architecture. The new memory technologies represent a deeper trend toward the integration of storage and compute, which is not simply a matter of swapping the storage medium but a coordinated evolution of memory and compute architectures. In the MCU world, the boundary between the two is blurring. In a traditional MCU, storage and compute are separate modules: computation happens in the CPU or dedicated accelerators, while data is held and managed in external or internal flash, SRAM, and other memories. As computing tasks grow more complex, especially with machine learning, AI inference, and edge computing, this separation is becoming increasingly unsuitable. New memories such as MRAM and PCM create an opening for storage-compute integration. PCM in particular, through its phase-change behavior, offers not only non-volatile storage but can also play a near-memory computing role in some applications, easing data-transfer bottlenecks and accelerating processing. MRAM's fast reads and writes likewise let it work alongside compute modules to improve efficiency in scenarios such as edge AI inference and real-time data processing. In today's world of edge AI, fragmented OTA updates, and agile software, the "intelligence" of an MCU increasingly depends on its memory. Future MCU architectures are expected to couple storage and compute ever more tightly, creating more efficient, flexible, and intelligent systems.
    Conclusion
    Over the past decade we have grown used to viewing the MCU as a representative of "control" systems, with its embedded memory as a mere supporting component; but in the era of AI, SDVs, and edge intelligence, memory is moving from backstage to center stage and becoming a core part of the compute architecture. This is not just a change of materials and an evolution of processes; it is a key step for the MCU from merely "usable" to "scalable" and "evolvable."
    In this wave of microcontroller upgrades triggered by embedded memory, we see not only diverging routes among the leading manufacturers but also the accelerated adaptation and evolution of the entire industry chain, from foundries to toolchains and from automotive to industrial applications. This transformation has only just begun.
    - July 13, 2025
  • HBM, A New War
    Entering the "post-AI" era, HBM is no longer just a standard component of high-performance AI chips such as GPUs and TPUs; it has become strategic high ground fiercely contested by the semiconductor giants. Samsung, SK Hynix, and Micron, the leaders of the memory industry, all see HBM as a key engine of future revenue growth, and they appear to have reached a consensus: to dominate the memory market, one must first master the core technologies of HBM. So in this war without gunpowder, which technologies deserve attention? Let's dig in.
    Is customization the only way forward?
    Customization may be one of HBM's ultimate destinations. More than two years ago, when HBM was just emerging, Hynix and Samsung were already discussing the trend toward customization. As cloud giants design their own AI chips, demand for HBM has only grown, making customization all but inevitable. In August last year, SK Hynix Vice President Yoo Sung-soo said, "All of the M7 companies have come to us requesting customized HBM." (The "Magnificent 7" refers to the seven big US tech stocks in the S&P 500: Apple, Microsoft, Alphabet, Amazon, Nvidia, Meta, and Tesla.) In June of this year, South Korean media reported that SK Hynix had simultaneously targeted companies such as Nvidia, Microsoft, and Broadcom, which are expected to become "heavyweight customers" in the custom HBM market; it has recently reached agreements with Nvidia, Microsoft, and Broadcom to supply customized HBM and has begun design work for each company's requirements. SK Hynix is reportedly prioritizing the supply plan of its largest customer, NVIDIA, before setting the order of other customers. Industry insiders said that "given SK Hynix's capacity and the AI service launch schedules of the major technology companies, it cannot meet the needs of all M7 customers at once," while noting that "given how the HBM market is changing, several new customers may be added later." SK Hynix also announced in April this year that it will shift to customization starting with the seventh-generation HBM (HBM4E) and has partnered with TSMC, saying it plans to adopt TSMC's advanced logic process for the HBM4 base die, with the first customized HBM products expected in the second half of next year. Notably, by landing multiple heavyweight customers, SK Hynix has greatly improved its chances of keeping its dominant position in next-generation custom HBM. According to TrendForce, SK Hynix currently holds roughly 50% of the HBM market, far ahead of Samsung Electronics (30%) and Micron (20%); looking only at the latest HBM3E products, its share is as high as 70%. Samsung Electronics, for its part, is reported to be in talks with multiple customers about supplying customized HBM. Given its recent success supplying HBM3E to AMD, the world's second-largest AI chip maker, the industry expects Samsung to win HBM4 and custom HBM customers soon; it is reportedly in concrete negotiations with customers such as Broadcom and AMD over HBM4.
    Compared with the two Korean manufacturers, Micron, across the Pacific in the United States, appears to be moving more slowly. In June of this year, Raj Narasimhan, senior vice president and general manager of Micron's Cloud Memory Business Unit, said the HBM4 production plan will be closely tied to the readiness of customers' next-generation AI platforms to ensure seamless integration and timely ramp-up to meet market demand. He added that besides providing the latest HBM4 to mainstream customers, customers are also asking for customized versions, development of the next-generation HBM4E is under way, and working with specific customers on custom HBM solutions will further raise the value of memory products.
    At this point many readers may ask: what are the benefits of custom HBM, and why are DRAM makers and cloud giants flocking to it? First, the key to custom HBM (cHBM) is integrating the functions of the base die into the logic die designed by the SoC team, including controlling the I/O interfaces, managing the DRAM stack, and carrying the direct access (DA) ports used for diagnosis and maintenance. This integration requires close collaboration with the DRAM manufacturer, but it gives SoC designers more flexibility and tighter control over access to the HBM core die stack. Designers can couple memory and processor dies more closely and optimize power, performance, and area (PPA) for specific applications. SoC designers can configure and instantiate their own HBM memory controllers and talk to the HBM DRAM stack directly through a DFI-to-TSV bridge. The logic die can also integrate enhanced features such as programmable, high-quality built-in self-test (BIST) controllers, die-to-die (D2D) adapters, and high-speed interfaces (such as the Universal Chiplet Interconnect Express, UCIe), enabling communication with processor dies in a full 3D stack. Because this die is manufactured in a logic process rather than a DRAM process, existing designs can be reused. One important advantage of custom HBM is that it significantly reduces the latency introduced by the interposer in the data path, along with the associated power and performance losses, by reusing existing high-speed die-to-die interconnects (such as UCIe) to shorten the distance between memory and processor dies. This flexibility suits a wide range of scenarios, from cloud providers' edge AI applications with very tight cost and power budgets to systems chasing maximum capacity and throughput for complex AI and machine learning workloads. Custom HBM does, however, face challenges: the whole concept is still new and the technology is at an early stage, and like all innovations the road ahead will not be smooth. Moving base die functions into the logic die means end users must think about the entire lifecycle from a silicon lifecycle management (SLM) perspective, from design and bring-up through volume production to in-field operation. For example, after wafer-level HBM die stacking, responsibility for screening DRAM cell defects falls on the end user. This raises questions, such as how users should handle the specific DRAM algorithms recommended by suppliers,
    And can users carry out comprehensive in-field testing and diagnosis of the HBM during planned downtime? For now, successfully deploying customized HBM requires a complete ecosystem that brings together IP providers, DRAM manufacturers, SoC designers, and ATE (automated test equipment) companies. For example, because the interconnects are so numerous and dense, traditional ATE can no longer be used as-is for customized HBM testing. In short, customized HBM has become a major trend, and whether manufacturers like it or not, it will occupy an important place in the HBM4 generation.

    Hybrid bonding: a technical challenge that cannot be bypassed?

    Besides customization, hybrid bonding is another important direction for future HBM. As stack heights keep increasing, traditional solder-based bonding is running into serious limits. The flux used today removes oxides from metal surfaces and helps the solder flow, but its residues cause problems such as larger stack gaps and concentrated thermal stress, and in precision packaging such as high-bandwidth memory this contradiction is especially acute. Samsung, SK Hynix, and Micron are all considering hybrid bonding for their next-generation HBM.

    Let's first review how HBM dies are bonded today. In traditional flip-chip bonding, the die is "flipped" so that its solder bumps (also known as C4 bumps) align with the bond pads on the substrate. The whole assembly is placed in a reflow oven and heated uniformly to around 200-250°C, depending on the solder material; the bumps melt and form the electrical interconnects between the die and the substrate. As interconnect density rises and the pitch shrinks below 50 µm, the flip-chip process runs into trouble. Because the entire package sits in the oven, the die and the substrate expand at different rates (their coefficients of thermal expansion, CTE, differ), causing warpage and interconnect failures. The molten solder can also spread beyond its designated area, a phenomenon called solder bridging, which creates unwanted electrical connections between adjacent pads and can short-circuit and ruin the chip.

    This is where thermocompression bonding (TCB) comes in, because it solves the problems flip-chip faces once the pitch falls below a certain point. The advantage of TCB is that heat is applied locally to the interconnect points through a heated bond head rather than uniformly in a reflow oven, which reduces heat transfer into the substrate, easing thermal stress and CTE mismatch and producing stronger joints. Pressure is also applied to the die to improve bond quality. Typical process temperatures range from 150-300°C and pressures from 10-200 MPa. TCB allows higher contact density than flip-chip, in some cases up to 10,000 contacts per square millimeter, but the price of this precision is lower throughput: while a flip-chip bonder can exceed 10,000 chips per hour, TCB throughput is in the range of 1,000-3,000 chips per hour. The standard TCB process also still requires flux.
    Why does flux matter? During heating, the copper can oxidize and cause interconnect failures; flux is the coating used to strip those copper oxides away. But as the interconnect pitch shrinks toward 10 µm, the flux becomes much harder to remove and leaves sticky residue, which can slightly deform the interconnects and lead to corrosion and shorts. Fluxless bonding emerged in response, but it only extends the pitch down to about 20 µm, or 10 µm at best, and can only serve as a transitional technology. Once the I/O pitch falls below 10 µm, hybrid bonding is required. Hybrid bonding stacks DRAM dies through direct copper-to-copper bonds, eliminating the traditional bump structure altogether. This not only shrinks the package significantly but also roughly doubles energy efficiency and overall performance.

    According to industry sources on May 7, Samsung Electronics and SK Hynix are both pushing to use hybrid bonding in mass production of their next-generation HBM. Samsung is expected to adopt the technology as early as next year in HBM4 (the sixth-generation HBM), while SK Hynix may introduce it first in the seventh-generation product, HBM4E. The current fifth generation, HBM3E, still uses thermocompression bonding, fixing and stacking dies with heat, pressure, and bump connections. Samsung buys its TC bonders mainly from its subsidiary SEMES and from Japan's Shinkawa, while SK Hynix relies on Hanmi Semiconductor and Hanwha Semiconductor. Micron, which supplies HBM to Nvidia, also buys equipment from South Korean and US vendors and from Shinkawa.

    As the hybrid bonding market begins to open up, the technology is expected to trigger a major reshuffle among semiconductor equipment makers. Once it is successfully brought into production, hybrid bonding may become the mainstream process for future HBM stacking. To get ahead, US-based Applied Materials has acquired a 9% stake in Besi, described as the only company in the world with advanced hybrid bonding equipment capability, and has moved first to introduce its hybrid bonding tools into the system semiconductor market. Meanwhile, Hanmi Semiconductor and Hanwha Semiconductor are accelerating development of next-generation die-stacking equipment; the two Korean makers are not only pushing ahead with hybrid bonding tools but also developing solder-based bonding equipment to stay competitive. If customized HBM is a contest between DRAM makers and the cloud giants, hybrid bonding is a game between DRAM makers and bonding-equipment makers. With HBM officially entering the HBM4 era in the second half of this year, attention on hybrid bonding is likely to keep rising.

    What other new technologies are there?

    It is worth mentioning that in June of this year the Korea Advanced Institute of Science and Technology (KAIST), South Korea's national research university, released a 371-page study that systematically maps out the evolution of HBM from HBM4 to HBM8. It covers improvements in bandwidth, capacity, I/O width, and thermal design, as well as packaging methods, 3D stacking structures, memory-centric architectures with embedded NAND storage, and even machine-learning-based power control.
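    Before digging into that roadmap, the pitch thresholds quoted in the bonding discussion above can be condensed into a small selection sketch. This is only an illustration of the article's figures (roughly 50 µm, 20 µm, and 10 µm), not a vendor or JEDEC specification, and the helper function name is ours:

```python
# Hypothetical helper condensing the pitch thresholds quoted in this article.
# The cut-offs (~50 um, ~20 um, ~10 um) are the article's figures, not
# equipment or standards data.

def suggest_bonding(pitch_um: float) -> str:
    """Return the bonding approach the article associates with a given I/O pitch."""
    if pitch_um >= 50:
        return "flip-chip (mass reflow) is still comfortable"
    if pitch_um > 20:
        return "thermocompression bonding (TCB) with flux"
    if pitch_um >= 10:
        return "fluxless TCB, described above as a transitional option"
    return "hybrid (bumpless Cu-Cu) bonding"

if __name__ == "__main__":
    for pitch in (130, 40, 15, 6):
        print(f"{pitch:>4} um pitch -> {suggest_bonding(pitch)}")
```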
    It is worth emphasizing that the KAIST study is not a product roadmap from a commercial company but an academic projection of how HBM technology could evolve, based on current industry trends and research progress. Even so, it offers a useful glimpse of where HBM may be heading. Let's first look at the technical features of each generation from HBM4 to HBM8.

    HBM4: pioneer of customized design. As the start of the new HBM generation, HBM4's biggest innovation is the customized base die. By integrating an NMC (near-memory computing) processor and an LPDDR controller, HBM4 allows direct access to HBM and LPDDR without CPU intervention, significantly reducing data-transfer latency and improving overall system efficiency. HBM4 supports several flexible data-transfer modes, including direct reads and writes between the GPU and HBM, data migration between HBM and LPDDR, and indirect GPU access to LPDDR through HBM. Dual command execution further improves multitasking efficiency and provides strong support for complex AI workloads.

    HBM5: a breakthrough in 3D near-memory computing. HBM5 pushes 3D near-memory computing to a new level. By integrating an NMC processor die and a cache die, connected through dedicated TSVs and power-delivery networks, HBM5 achieves a highly energy-efficient computing architecture. Distributed power/ground and thermal TSV arrays reduce IR drop and improve heat dissipation. Notably, HBM5 introduces AI design-agent optimization, using intelligent algorithms to optimize TSV placement and decoupling-capacitor layout and thereby significantly reduce power-supply-noise-induced jitter (PSIJ). This improves system stability and lays the groundwork for intelligent design in later generations.

    HBM6: the multi-tower architecture. HBM6's biggest highlight is the quad-tower architecture: four DRAM stacks share one base die, reaching a remarkable 8 TB/s of bandwidth through 8,096 I/O channels. The design improves bandwidth while also improving cost-effectiveness through resource sharing. Integrated L3 cache is another key innovation: by reducing direct HBM accesses, the L3 cache markedly improves LLM inference performance; test data in the study show that HBM6's embedded L3 cache cuts HBM accesses by 73% and latency by 87.3%. A crossbar switch network enables HBM cluster interconnection, optimizing high-throughput, low-latency LLM inference.

    HBM7: a hybrid memory ecosystem. HBM7 builds out a complete hybrid memory ecosystem. By integrating high-bandwidth flash (HBF), it forms an HBM-HBF memory network with a total capacity of 17.6 TB, enough for large-scale AI inference. Combining this with 3D-stacked LPDDR further extends the memory hierarchy, achieving 4,096 GB/s of interconnect bandwidth over a glass interposer. Embedded cooling is a defining feature of HBM7: thermal transfer lines and fluidic TSVs carry heat efficiently from the dies to the coolant.
    The introduction of LLM-assisted interactive reinforcement learning (IRL) makes decoupling-capacitor placement and PSIJ optimization more intelligent and precise.

    HBM8: the era of full 3D integration. HBM8 represents the pinnacle of the roadmap, achieving true full-3D integration and HBM-centric computing. A double-sided interposer supports 3D extension architectures such as GPU-HBM-HBM, GPU-HBM-HBF, and GPU-HBM-LPDDR, giving flexible configuration options for different applications. The fully 3D GPU-HBM integrated architecture is HBM8's core innovation: the GPU sits on top of the memory stack, which both helps heat removal and fuses memory and compute. AI design agents are applied throughout, making 3D placement and routing smarter by co-optimizing thermal and signal integrity.

    Looking at the overall trend, HBM's evolution shows leaps of an order of magnitude. Bandwidth rises an astonishing 32-fold, from 2.0 TB/s in HBM4 to 64 TB/s in HBM8, achieved along two dimensions: a large increase in I/O count, from 2,048 to 16,384, and a steady rise in per-pin data rate, from 8 Gbps to 32 Gbps. Capacity per module grows from 48 GB in HBM4 to 240 GB in HBM8, through more stacked layers and higher per-die density. Power consumption rises gradually from 75 W to 180 W; although power goes up, the performance gains are so large that overall energy efficiency still improves markedly.

    Key technological innovation paths

    Another hallmark of HBM's evolution is continuous progress in 3D integration. Starting with HBM4, the roadmap gradually moves from traditional micro-bump bonding to bumpless Cu-Cu direct bonding, a shift that sharply reduces contact resistance and greatly increases interconnect density, laying the foundation for later high-density 3D stacking. TSV (through-silicon via) technology, the core of 3D integration, provides efficient electrical connections between vertically stacked dies; by shortening interconnect length, TSVs cut RC delay and power, supporting high-bandwidth data transfer. At the HBM8 stage, coaxial TSVs further improve signal integrity and support 32 Gbps data rates. Interposer technology also advances notably, from single silicon interposers to silicon-glass hybrid interposers; the hybrid approach breaks the size limits of pure silicon interposers while preserving signal integrity, combining the high bandwidth of silicon with the large-area scalability of glass and enabling complex multi-tower architectures.

    It is worth noting that as HBM performance keeps climbing, heat removal has become a key bottleneck.
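    Before turning to cooling, the bandwidth figures quoted above are easy to sanity-check from the cited I/O counts and per-pin data rates. The minimal sketch below uses only the numbers given in this article and assumes the binary convention (1 TB/s = 1,024 GB/s) that reproduces the quoted totals:

```python
# Quick sanity check of the bandwidth figures quoted above.
# io_count and data_rate_gbps are the per-generation numbers cited in this
# article (from the KAIST projection); the formula is simply
# bandwidth = I/O count x per-pin data rate.

GENERATIONS = {
    "HBM4": {"io_count": 2048,  "data_rate_gbps": 8},
    "HBM8": {"io_count": 16384, "data_rate_gbps": 32},
}

for name, spec in GENERATIONS.items():
    gbits_per_s = spec["io_count"] * spec["data_rate_gbps"]   # total Gbit/s
    tbytes_per_s = gbits_per_s / 8 / 1024                     # -> TB/s (binary)
    print(f"{name}: {spec['io_count']} I/O x {spec['data_rate_gbps']} Gbps "
          f"= {tbytes_per_s:.1f} TB/s")

# Output:
#   HBM4: 2048 I/O x 8 Gbps = 2.0 TB/s
#   HBM8: 16384 I/O x 32 Gbps = 64.0 TB/s   (the 32x increase stated above)
```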
    On the cooling side, the roadmap lays out a clear evolution from traditional air cooling to progressively more advanced solutions. HBM4 adopts direct-to-chip (D2C) liquid cooling, which cools the die directly with liquid and removes heat more efficiently than air. At the HBM5 and HBM6 stages, immersion cooling becomes mainstream, submerging the entire module in a dielectric coolant for more uniform and efficient heat removal. The most advanced approach is the embedded cooling used in HBM7 and HBM8, which achieves die-level precision cooling through fluidic TSVs (F-TSVs) and microchannel structures; heat is carried from the HBM dies directly into the coolant through thermal transfer lines (TTLs), reaching unprecedented heat-removal efficiency.

    These advances bring significant performance gains. For LLM inference, HBM6's quad-tower architecture raises the inference throughput of the LLaMA3-70B model by 126%. On energy efficiency, HBM7's NMC architecture reduces data movement and cuts power consumption for GEMM workloads by more than 30%. System-level scalability also improves markedly: HBM8's full-3D architecture supports multiple GPU-HBM clusters with total bandwidth of up to 1,024 TB/s, providing powerful memory support for exascale computing. These gains not only meet the needs of today's AI applications but also lay the technical groundwork for future artificial general intelligence (AGI).

    A final word

    From customized HBM to hybrid bonding, and from next-generation interposers to converged memory architectures, HBM technology is evolving at an ever faster pace. But in such a complex technology race, only players with a system-level perspective and the ability to deeply integrate multiple process technologies and ecosystem resources will stand out. With SK Hynix handing base-die foundry work to TSMC, DRAM makers' control over the HBM manufacturing flow has gradually weakened; this technology system is no longer something a single vendor can deliver alone, but a new battlefield that demands multi-party collaboration and cross-domain integration. Whether SK Hynix, Samsung, or Micron ends up on top remains an open question. What is certain is that in the post-AI era, the HBM contest has only just begun, and it will only intensify.
    - July 12, 2025
  • NXP's Li Xiaohe: further strengthening the China footprint with "defined in China" products and industry synergy
    NXP's Li Xiaohe: further strengthening the China footprint with "defined in China" products and industry synergy
    Driven by the digital wave, China has in recent years kept accelerating the deep integration of industrialization and digitalization. The latest Digital China Construction 2025 Action Plan issued by the National Data Bureau states that by the end of 2025 the core industries of the digital economy should account for more than 10% of GDP and national computing capacity should exceed 300 EFLOPS. Artificial intelligence, the industrial internet, intelligent connected vehicles, and related fields are set to become the core engines of surging semiconductor demand. This strategic deployment not only provides policy dividends for the local semiconductor industry but also draws global leaders to accelerate their presence. Recently, the "NXP 2025 Automotive Leadership Media Open Day" was held in Dalian, where reporters interviewed Li Xiaohe, Executive Vice President of NXP Semiconductors and General Manager of its China Business Unit, for an in-depth discussion of NXP's China strategy, local footprint, and technology trends.

    Strengthening investment in the Chinese market is an inevitable choice

    With industrialization and digitalization advancing, China's technology industry has grown very quickly in recent years, powerfully driving the semiconductor sector. According to WSTS, the global semiconductor market is expected to reach 700.9 billion US dollars in 2025, up 11.2% year on year, with China accounting for roughly 30%, making it one of the world's largest semiconductor consumer markets. Customs data likewise show that from January to May 2025 China imported 231.5 billion integrated circuits, up 8.4%, worth approximately 156.8 billion US dollars, and exported 135.9 billion units, up 19.5%, worth about 73.26 billion US dollars.

    Huge market demand and constantly evolving innovation have led more and more global semiconductor companies to pay close attention to China. In his speech at the open day, Li Xiaohe stressed the importance of the Chinese market: it not only accounts for one-third of NXP's sales; more importantly, China has in recent years produced many representatives of "new quality productive forces." For example, China now accounts for 70% of global electric vehicle production and sales, 76% of battery production comes from China, and 56% of the world's largest robotics companies come from China. This, he said, strongly supports the company's long-term development.

    Li Xiaohe believes that companies pursue different strategies because of their different "DNA," starting points, and core businesses. In NXP's case, more than 50% of its business comes from automotive, automotive plus industrial together account for about 75% of total sales, and the Chinese market accounts for 35% of the company's total sales. That makes automotive and industrial NXP's two most important industry markets and China its most important regional market.

    In addition, since integrating Freescale in 2016, NXP has gradually grown into a strongly system-oriented company, one of the few in the world that commands microprocessors, sensors, connectivity chips, analog chips, and security chips, and that can organically combine functional safety and system security.
    Over the years NXP has kept strengthening its system capabilities in the hope of better empowering customers in automotive, industrial, and other fields. China is not only a major global automotive and industrial market but also NXP's most important regional market, and NXP is by nature a strong systems company; strengthening investment in the Chinese market is therefore all but inevitable.

    Defining and designing products in China

    In fact, NXP has been increasing its investment in China for years. The company has been active in China for 39 years and today has 6,000 employees there, including 1,600 engineers, along with 6 R&D centers, 14 offices, and its largest back-end assembly and test factory in the world.

    More noteworthy, on January 1 of this year NXP established its China Business Unit, an important step in strengthening its in-country strategy. According to Li Xiaohe, the China Business Unit is not merely a sales entity; it integrates sales, R&D, operations, quality, and technical support. That allows NXP to offer Chinese users more competitive products, faster innovation cycles, better product optimization, and higher R&D efficiency, putting the idea of "in China, for China" into practice.

    On this basis, Li Xiaohe went further with the concept of "in China, for the world," which the new business unit also advances. As Chinese companies innovate in automotive, industrial, and other fields, their competitiveness keeps growing; China's automotive and industrial markets now lead global development and represent the strongest competition in the world. "We believe that success in China can also become success in the world," he said.

    Alongside the China Business Unit, NXP has set up product-management and product-definition teams in addition to technical-support and production teams, and many critical products will now be defined and designed in China. For example, the latest generation of battery-management products NXP released for the global market was defined in China. When the China Business Unit's technical team runs a project, it can combine R&D resources from China and overseas; overseas engineers are folded into the same project team to develop products together with Chinese engineers. The resulting products are managed and defined in China, and China is the first place where they are rapidly validated with customers.

    Through this approach NXP has already launched several products, including the latest 18-channel lithium battery cell controller, the BMx7318/7518 series. These products are not only shipping in the Chinese market but have also been widely accepted overseas.

    New trends provide new growth

    As the automotive industry continues its shift toward intelligence and electrification, semiconductor technology has become the foundation and core driver of that change.
    The global automotive semiconductor market is expected to exceed 65 billion US dollars in 2025, and the Chinese market, at a scale of 250 billion yuan, will take nearly 30% of the global share, with a compound annual growth rate of 11.6%. Behind this growth is the demand created by technology iteration: the penetration rate of new energy vehicles now exceeds 50% and that of L2+ intelligent driving systems exceeds 60%. As a global leader in automotive semiconductors, NXP's moves are naturally watched closely by the industry.

    NXP's gaze, however, is not limited to automotive. Li Xiaohe pointed out, "We believe that in the next few years China will see the parallel, converging development of automobiles, humanoid robots, and the low-altitude economy, because the underlying technologies of these industries, such as functional safety, information security, and manufacturing supply chains, overlap in many ways. Robotics and the low-altitude economy will develop faster than typical emerging industries because they can draw on what electric vehicles have already built."

    The automotive and industrial sectors are also complementary. Industrial manufacturing is becoming more automated and intelligent, and AI is applied ever more widely; edge AI has penetrated the industrial field, and many solutions share requirements with automotive, such as low power consumption, real-time behavior, safety verification, functional safety, and reliability.

    These technologies will also extend into fields such as healthcare and the smart home. Smart wearables demand very low power, and smart homes have a large need for secure connectivity. In the future, more and more devices, from cars and home appliances to phones and wearables, will be interconnected, forming a new ecosystem. The car will surpass the living room and the office to become the most technology-intensive space in which people spend time at a fixed time and place; built on new energy, high-performance computing, and related technologies, it will absorb scenarios such as health monitoring and human-machine interaction, becoming people's second office, second smart home, and even a place to rest. Underpinning all of this are low power consumption, functional safety, system security, real-time performance, and similar technologies, leaving a great deal of untapped space for the future.

    Li Xiaohe believes that throughout this development the Chinese market will remain at the forefront, and NXP will integrate even more deeply into it, joining hands with ecosystem partners to empower the industry together.
    - July 10, 2025