⚡ Quick News
- Apple Postpones Siri Upgrade to 2026: The tech giant has announced a delay in its AI-powered Siri enhancements, shifting the expected rollout to 2026. The postponement raises concerns about Apple's competitiveness against rivals such as OpenAI's ChatGPT.
- Reflection AI Secures Major Funding: Reflection AI has raised $130 million in its latest funding round, reflecting strong investor confidence. The new capital will enhance the company's capabilities and accelerate its AI-driven insights.
- NVIDIA Leads Tech Market Decline: NVIDIA's stock has dropped sharply amid a sector-wide sell-off hitting the so-called 'Magnificent Seven'. Recession fears and broader economic concerns are driving investor wariness.
- Chinese Hackers Exploit ChatGPT for Surveillance: OpenAI has revealed evidence that Chinese hackers are using ChatGPT to monitor dissenting posts in the West. The discovery highlights the potential misuse of advanced AI technologies in geopolitical conflicts.

Google has made significant advances in artificial intelligence with the launch of two products through its DeepMind division. Gemma 3 arrives as a family of four open-weight language models ranging from 1B to 27B parameters with multimodal capabilities, while Gemini 2.0 Flash gains integrated image generation that bypasses traditional diffusion models, promising a more streamlined pipeline. The 128K-token context windows on Gemma 3 models enable analysis of far longer texts than previous limits allowed.
Key Highlights:
- Gemma 3 offers multimodal vision-text processing and supports 140+ languages.
- Gemini 2.0 Flash provides native image generation with conversational editing.
- Early users can access the new models via Google AI Studio with straightforward API integration (see the sketch after this list).
- The new models reportedly reach 98% accuracy while using significantly fewer resources.
- Several sectors are already recognizing the commercial potential of these technologies.
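As a rough illustration of the API access mentioned above, here is a minimal sketch that calls Gemini 2.0 Flash through the Google AI Studio API using the google-generativeai Python package. The model identifier "gemini-2.0-flash" and the summarization prompt are illustrative assumptions; check Google AI Studio for the exact model names available to your key.

```python
# Minimal sketch: querying Gemini 2.0 Flash via the Google AI Studio API.
# Assumes the google-generativeai package is installed and that
# "gemini-2.0-flash" is the exposed model identifier (verify in AI Studio).
import os

import google.generativeai as genai

# API keys are issued through Google AI Studio.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-2.0-flash")

# Long-context use case: summarize a large document in a single call.
with open("report.txt", encoding="utf-8") as f:
    document = f.read()

response = model.generate_content(
    f"Summarize the key findings of this report:\n\n{document}"
)
print(response.text)
```

Since Gemma 3 is open-weight, the same kind of prompt could alternatively be run locally against the published checkpoints rather than through the hosted API.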
If you're enjoying Nerdic Download, please forward this article to a colleague. It helps us keep this content free.

In a strategic move to reduce its dependence on NVIDIA and cut costs, Meta has begun testing its custom AI training chip, the Meta Training and Inference Accelerator (MTIA). The chip marks a pivotal shift toward in-house-designed semiconductors for AI-specific tasks, promising substantial efficiency improvements over NVIDIA's hardware. Produced by TSMC, the MTIA will initially support recommendation systems, with plans to expand to generative AI workloads.
Key Highlights:
- MTIA chips are built on a 7nm process, delivering significant power efficiency improvements.
- Meta aims to deploy these chips by 2026 to accommodate its expanding AI infrastructure needs.
- The collaboration with TSMC marks a key milestone in Meta's semiconductor efforts.
- Custom chips are projected to reduce GPU procurement costs substantially.
- The move aligns Meta with other tech giants investing in proprietary AI hardware solutions.

NVIDIA is set to unveil its next-generation B300 and GB300 GPUs at the 2025 GPU Technology Conference, building on the Blackwell architecture. These GPUs are engineered to deliver major performance gains, with up to 30 times the AI inference speed of previous models. The advancements respond to growing competition from companies like DeepSeek and Google, as well as shifts toward in-house AI hardware by major tech firms.
Key Highlights:
- Enhanced NVLink bandwidth enables seamless GPU-to-GPU communication, crucial for large data handling.
- Upgraded Transformer Engine supports faster and more precise language model training.
- Substantial memory and networking upgrades cater to hyperscale AI operations.
- Market pressures are mounting, driven by competitors using fewer chips while achieving competitive performance.
- NVIDIA aims to reassure investors post-stock decline by demonstrating adaptability and performance improvements.

DeepSeek's latest advancements in artificial intelligence have triggered a significant increase in Chinese tech stocks, revitalizing investor optimism about China's AI industry potential. The company's R1 model is garnering attention for its cost-effective performance amid global competition, highlighting China's resilience against U.S. export restrictions. Consequently, major Chinese tech firms like Alibaba have seen notable stock increases, reflecting a broader market rally.
Key Highlights:
- DeepSeek's R1 model demonstrates cost-efficiency, challenging Western AI giants.
- Major Chinese companies, including Alibaba, have benefited from increased investor interest.
- The Hang Seng Tech Index has surged, showcasing renewed confidence in China's tech leadership.
- China continues to push for open-source AI development to compete globally.
- Chinese server manufacturers are expanding capabilities to integrate DeepSeek’s models.
🛠️ New AI Tools
- Foxconn Debuts FoxBrain for Supply Chain: Foxconn introduces FoxBrain, a reasoning model developed to enhance supply chain management and manufacturing. Initially for internal use, it was built using NVIDIA infrastructure on Taiwan's largest supercomputer.
- Harvey's AI Legal Agents Excel: Harvey launches AI agents capable of performing core legal tasks with efficiency comparable to that of human lawyers, promising significant improvements to legal workflows.
- DeepSeek's FlashMLA Enhances LLM Efficiency: FlashMLA, released by DeepSeek, optimizes large language models to run more efficiently on NVIDIA hardware. The release kicks off the company's Open-Source Week initiative.
- Google's Gemini Enhances Communication Features: Google's Gemini now supports question-and-answer sessions over live video and screen sharing, giving users a more dynamic platform for visual interaction.