
Generative AI: 1st April 2025


Published 1st April 2025

In the news

📰 AI Industry News & Analysis

Gemini hackers can deliver more potent attacks with a helping hand from… Gemini (arstechnica.com, 2025-03-28). Researchers demonstrate 'Fun-Tuning', a discrete-optimization technique that abuses Gemini's fine-tuning API to craft prompt-injection attacks with markedly higher success rates against Google's closed-weight models

The role of developer skills in agentic coding (martinfowler.com, 2025-03-25). Exploring the integration of Generative AI, specifically Large Language Models like GitHub Copilot, in coding workflows, emphasizing prompt composition and in-line assistance to improve software delivery practices

AI #109: Google Fails Marketing Forever (thezvi.wordpress.com, 2025-03-27). Google's Gemini 2.5 Pro LLM receives overwhelmingly positive feedback, yet fails to capture public attention. Other advances include Claude's web search feature and discussions on AI ethics and practical applications in education and workplace productivity

Talk: AI Engineering at Jane Street (shekhargulati.com, 2025-03-29). Insights from Jane Street's engineering practices reveal their decision to train a custom large language model (LLM) to enhance development processes, facilitated by their AI companion application, Videocrawl

LLM Scale and Solving of Programming as a Path to Superintelligence (nextbigfuture.com, 2025-03-25). Yann LeCun highlights limitations of LLMs and CoT prompting, suggesting new AI paradigms like objective-driven AI and hierarchical planning to achieve superintelligence alongside hybrid systems integrating LLMs with advanced AI planning

The Scaling Era: An Oral History of AI, 2019–2025 (goodreads.com, 2025-03-29). An inside look at the AI revolution from 2019–2025 through interviews with prominent figures like Dario Amodei, Demis Hassabis, and Mark Zuckerberg, exploring large language models and the implications of superintelligent AIs

Import AI 406: AI-driven software explosion; robot hands are still bad; better LLMs via pdb (jack-clark.net, 2025-03-31). AI research may soon automate itself, leading to rapid advancements. Incorporating tools like 'debug-gym' improves coding capabilities, while humanlike robot dexterity remains a challenging goal due to complex manipulation tasks

Promises vs. Reality: Unraveling the Myth and Magic of Large Language Models (eliza-ng.me, 2025-03-28). Large Language Models (LLMs) have advanced significantly, showcasing capabilities in natural language processing and automation, yet they face criticism for limitations like misinformation and misunderstanding complex logic, raising concerns about expectations versus reality

🏢 Industry Practices & Economics

TAO: Using test-time compute to train efficient LLMs without labeled data (databricks.com, 2025-03-25). TAO leverages test-time compute and reinforcement learning to enhance LLM performance without labeled data, allowing enterprises to improve model quality efficiently using existing input data, surpassing traditional fine-tuning methods

Escaping POC Purgatory: Evaluation-Driven Development for AI Systems (oreilly.com, 2025-03-25). Building LLM applications can lead to 'POC Purgatory.' Evaluating continually and adopting Evaluation-Driven Development (EDD) are crucial to enhance reliability, using methods like synthetic data generation and error analysis with tools such as LlamaIndex

LLM economics: How to avoid costly pitfalls (aiacceleratorinstitute.com, 2025-03-25). Understanding LLM economics is crucial for businesses. Explore token pricing, scaling costs, prompt engineering, and tools like Vellum to optimize AI expenses and improve cost-efficiency in AI implementations
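
Token pricing drives most of these costs. As a back-of-the-envelope sketch (the per-million-token rates and volumes below are placeholders, not any provider's current price list):

```python
# Rough LLM cost estimate: tokens consumed times price per million tokens.
# Rates and volumes here are illustrative placeholders only.
input_rate, output_rate = 3.00, 15.00      # $ per million tokens (assumed)
n_requests = 10_000
tokens_in, tokens_out = 1_500, 400         # average tokens per request (assumed)

cost = n_requests * (tokens_in * input_rate + tokens_out * output_rate) / 1e6
print(f"${cost:,.2f} for {n_requests:,} requests")  # $105.00 for 10,000 requests
```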

LLMs: An Operator’s View (theengineeringmanager.com, 2025-03-30). Organizations must integrate LLM tools like ChatGPT and Cursor to enhance team productivity and efficiency, while preparing for the impact of accelerated coding on hiring practices and code review processes

Benchmarks distract us from what matters (ehudreiter.com, 2025-03-26). A fixation on easy-to-measure LLM benchmarks may undermine real-world utility, neglecting capabilities like emotional appropriateness in favor of tasks that matter to relatively few users, such as math problems

⚙️ System Design & Advanced Applications

Open-Sora 2.0 Explained: Architecture, Training, and Why It Matters (louisbouchard.ai, 2025-03-28). Open-Sora 2.0 utilizes FLUX and MM-DiT architectures to achieve high-quality text-to-video synthesis with significant cost efficiency, requiring only $200K while providing advanced motion modeling and high-resolution video generation

LLM-as-judge for enterprises: evaluate model alignment at scale (s46486.pcdn.co, 2025-03-26). Enterprises leverage LLM-as-Judge to evaluate AI outputs, improve alignment, reduce bias, and automate reviews, utilizing tools like Snorkel Flow and grounding evaluations with expert-verified data for reliable model assessments
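
The underlying pattern is simple, even if enterprise tooling adds grounding and review layers on top. A minimal LLM-as-judge sketch, assuming the openai Python package with OPENAI_API_KEY set (this shows the generic pattern, not Snorkel Flow's API; the model name and rubric are placeholders):

```python
# Minimal LLM-as-judge sketch: ask a strong model to score an answer
# against a rubric. Model name and rubric are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

RUBRIC = """You are a strict evaluator. Score the ANSWER to the QUESTION
on a 1-5 scale for factual accuracy and relevance.
Reply with the integer score only."""

def judge(question: str, answer: str) -> int:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder judge model
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"QUESTION: {question}\nANSWER: {answer}"},
        ],
        temperature=0,
    )
    # A production judge would validate/retry non-integer replies.
    return int(resp.choices[0].message.content.strip())

print(judge("What is the capital of France?", "Paris"))
```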

Building the Operating System for AI Agents (thedataexchange.media, 2025-03-27). Chi Wang discusses AG2, an open-source 'agent OS' enabling multi-agent AI systems with diverse interaction patterns, robust orchestration, and real-world applications in various fields, enhancing efficiency in complex knowledge work

MCP: An Introduction to Agentic Op Support (trustedsec.com, 2025-03-28). Explore how agents, powered by Large Language Models, can automate tasks using tools like ldapsearch and smbclient, facilitating effective network analysis and contributing to improved cybersecurity operations

🎨 Creative & Multimodal Experiments

Comparing local large language models for alt-text generation (saeedesmaili.com, 2025-03-31). Saeed Esmaili compares local language models for generating alt text across his 10,000-photo library, shares notes on photography and camera equipment, and recommends Dries' follow-up posts

Adventures in pixel space (davidyat.es, 2025-03-30). David Yates explores advances in AI image generation, particularly DALL-E 3 and GPT-4o, emphasizing how autoregressive models enable more precise edits and greater coherence than diffusion techniques

On AI, Art, Writing, and the Distillation of Creativity (adamcaudill.com, 2025-03-31). The discussion revolves around generative AI in art and writing, notably OpenAI's 4o model creating images, and explores the intricacies of creativity, intent, and the evolution of artistic forms like photography

🚀 Text-based Experiments & Case Studies

Running an LLM on a 12-inch PowerBook (512pixels.net, 2025-03-28). Andrew Rossignol runs LLM inference on a 2005 PowerBook G4, utilizing a 1.5GHz processor and 1GB RAM to successfully implement the TinyStories 110M Llama2 model, merging nostalgia with cutting-edge AI

LLM first impressions a few weeks in (rsdoiel.github.io, 2025-03-30). Exploring LLM coding reveals complexity and resource demands, highlighting tools like Gemma and coding practices that promote ergonomic benefits but raise sustainability concerns amidst the ongoing AI hype

Bootstrapping ranking models with an LLM judge (emiruz.com, 2025-03-30). Using 500 Hacker News titles, an article ranking model is bootstrapped through LLM-supplied labels, Ridge regression, and sentence transformer embeddings, achieving a Spearman correlation of 0.74 on preference-based ordering
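
The described pipeline is easy to reproduce in miniature: embed titles, fit Ridge regression against LLM-supplied scores, and check rank agreement with Spearman correlation. A sketch assuming sentence-transformers, scikit-learn, and scipy, with placeholder titles and labels standing in for the 500-title dataset:

```python
# Sketch of the bootstrap: sentence embeddings + Ridge regression fit on
# LLM-judge scores. Titles and scores below are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import Ridge
from scipy.stats import spearmanr

titles = [
    "Show HN: A tiny Lisp interpreter in Rust",
    "Why our startup failed after three years",
    "Postgres internals: how MVCC really works",
    "Ask HN: Best keyboard for RSI?",
    "A deep dive into speculative decoding",
    "My favorite coffee shops in Lisbon",
]
llm_scores = np.array([4.0, 3.0, 5.0, 2.0, 5.0, 1.0])  # placeholder LLM labels

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
X = model.encode(titles)

reg = Ridge(alpha=1.0).fit(X, llm_scores)
rho, _ = spearmanr(llm_scores, reg.predict(X))   # in-sample; the article holds out data
print(f"Spearman correlation: {rho:.2f}")
```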

Accelerating LLM Inference: Up to 3x Speedup on MI300X with Speculative Decoding (rocm.blogs.amd.com, 2025-03-27). Speculative decoding achieves up to 3x speedup in LLM inference on AMD's MI300X GPU, utilizing frameworks like vLLM and native PyTorch (gpt-fast) to enhance token generation efficiency
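
The trick behind the speedup: a cheap draft model proposes several tokens ahead, and the expensive target model verifies them all in one batched pass, keeping the longest agreeing prefix. A toy greedy variant with stand-in "models" over a random vocabulary (real systems such as vLLM use neural LMs and probabilistic acceptance):

```python
# Toy greedy speculative decoding. The two "models" are lookup tables over
# a toy vocabulary, purely to illustrate the propose-then-verify loop.
import numpy as np

VOCAB = 100
rng = np.random.default_rng(0)
W_target = rng.normal(size=(VOCAB, VOCAB))
W_draft = W_target + 0.1 * rng.normal(size=(VOCAB, VOCAB))  # draft approximates target

def greedy_next(W, token):
    return int(np.argmax(W[token]))

def speculative_generate(prompt_token, steps=20, k=4):
    seq = [prompt_token]
    while len(seq) < steps:
        proposed, t = [], seq[-1]
        for _ in range(k):                       # draft proposes k tokens (cheap)
            t = greedy_next(W_draft, t)
            proposed.append(t)
        accepted, t = 0, seq[-1]
        for p in proposed:                       # target verifies; in practice one batched pass
            if greedy_next(W_target, t) == p:
                seq.append(p)
                t, accepted = p, accepted + 1
            else:
                break
        if accepted < k:                         # on mismatch, emit the target's own token
            seq.append(greedy_next(W_target, t))
    return seq

print(speculative_generate(prompt_token=1))
```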

Revisiting a Past Project: Using an AI Agent to Improve Efficiency (8thlight.com, 2025-03-31). AI fatigue is prevalent, yet strategic GenAI application can enhance project efficiency, as demonstrated by a 2021 ConceiveAbilities project, showcasing tools like Python, BeautifulSoup, and OpenAI’s chat models

A Practical Experimentation of GraphRAG and Agentic Architecture With NeoConverse (medium.com/neo4j, 2025-03-27). Exploration of GraphRAG and agentic architecture reveals NeoConverse as an experimental application for grounding large language models with graph-based data, enhancing query accuracy and facilitating API-driven decision-making

Training an AI on Ancient Undeciphered Texts: What I Wish I DIDN’T Learn (tarakiyee.com, 2025-03-31). An exploration of training AI on ancient undeciphered texts like the Voynich Manuscript, using techniques like transformer models and custom embeddings, while challenging AI's understanding versus mere pattern recognition

It’s so easy to fool yourself (s-anand.net, 2025-03-30). The author evaluated the effectiveness of four LLMs (Grok 3, Claude 3.7 Sonnet, Gemini 2.5 Pro, and GPT-4.5) using a custom quotes arena app to assess preferences in journaling slogans
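
The article doesn't publish its scoring code; a common way to turn such pairwise preferences into a ranking is Elo updates, as chatbot arenas do. A generic sketch with placeholder battle outcomes (not the author's results):

```python
# Generic Elo ranking from pairwise preferences; outcomes below are
# placeholders, NOT the article's findings.
def elo_update(r_a, r_b, a_wins, k=32):
    exp_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    score_a = 1.0 if a_wins else 0.0
    return r_a + k * (score_a - exp_a), r_b + k * ((1 - score_a) - (1 - exp_a))

ratings = {m: 1000.0 for m in ["Grok 3", "Claude 3.7 Sonnet", "Gemini 2.5 Pro", "GPT-4.5"]}
battles = [  # (model_a, model_b, did_a_win) -- placeholder outcomes
    ("Claude 3.7 Sonnet", "GPT-4.5", True),
    ("Gemini 2.5 Pro", "Grok 3", False),
]
for a, b, a_wins in battles:
    ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], a_wins)
print(sorted(ratings.items(), key=lambda kv: -kv[1]))
```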

🔧 Building & Deploying

Building a local AI assistant with user context (tonisagrista.com, 2025-03-26). Using Ollama and Chroma DB, this project builds a local AI assistant that scrapes web content for a personalized experience, utilizing global documentation from Gaia Sky and the Retrieval-Augmented Generation concept
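
A minimal version of that retrieval-augmented loop, in the spirit of the project rather than its actual code, assuming `pip install chromadb ollama` and a local Ollama server with a model such as llama3 pulled:

```python
# Minimal local RAG sketch: store snippets in Chroma, retrieve the closest
# ones for a question, and pass them as context to a local Ollama model.
import chromadb
import ollama

client = chromadb.Client()
docs = client.create_collection("gaia-sky-docs")
docs.add(
    ids=["1", "2"],
    documents=[  # placeholder snippets standing in for scraped documentation
        "Gaia Sky is a real-time 3D astronomy visualisation platform.",
        "Camera paths in Gaia Sky can be scripted and replayed.",
    ],
)

question = "What is Gaia Sky?"
hits = docs.query(query_texts=[question], n_results=2)
context = "\n".join(hits["documents"][0])

answer = ollama.chat(
    model="llama3",  # assumed local model
    messages=[{"role": "user",
               "content": f"Answer using this context:\n{context}\n\nQ: {question}"}],
)
print(answer["message"]["content"])
```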

Build a DAX Code Generator AI LLM (datastud.dev, 2025-03-27). Develop a custom DAX code generation LLM using FastAPI and Vue, utilizing Anthropic's Claude model for context-aware DAX assistance and implementing real-time response evaluation
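
The FastAPI-plus-Claude wiring such a project implies might look like this sketch (not the article's code; the model id is a placeholder, and it assumes the anthropic and fastapi packages with ANTHROPIC_API_KEY set):

```python
# Sketch of a context-aware DAX-generation endpoint: FastAPI in front,
# Anthropic's Messages API behind. Run with: uvicorn thisfile:app
import anthropic
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
claude = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

class DaxRequest(BaseModel):
    question: str
    table_schema: str   # e.g. table and column names, for context-aware answers

@app.post("/dax")
def generate_dax(req: DaxRequest):
    msg = claude.messages.create(
        model="claude-3-7-sonnet-latest",   # placeholder model id
        max_tokens=500,
        system="You write correct, idiomatic DAX. Return only the measure.",
        messages=[{"role": "user",
                   "content": f"Schema:\n{req.table_schema}\n\nTask: {req.question}"}],
    )
    return {"dax": msg.content[0].text}
```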

Deepseek Raspberry Pi (markusloecher.github.io, 2025-03-29). Deepseek debuts 'DeepPi-1', a large language model trained on a Raspberry Pi for just $35, using a revolutionary compression algorithm that condenses petabytes of data into 32 megabytes, outperforming GPT-4

Self-hosted Perplexity Clone (cool-as-heck.blog, 2025-03-29). Perplexica is an open-source, self-hosted LLM-powered search engine that uses a distilled Qwen model from DeepSeek for query processing and result summarization on a VPS setup

🛠 Technical Tutorials

debug-gym (simonwillison.net, 2025-03-31). Microsoft Research explores LLMs using Python debugger (pdb); Claude 3.7 Sonnet showed significant improvement in performance with debugger tools, achieving up to 52.1% accuracy in debugging tasks

Nomic Embed Code: A State-of-the-Art Code Retriever (simonwillison.net, 2025-03-27). Nomic's new CodeRankEmbed model, a compact code retrieval tool, integrates with llm-sentence-transformers and Symbex, enabling efficient embedding and searching of codebases through SQLite for enhanced functionality

Sharding Pgvector (pgdog.dev, 2025-03-26). Sharding pgvector improves indexing for vector databases in Postgres by distributing IVFFlat and K-means centroids across multiple machines, enhancing performance while managing large embeddings effectively
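
The core idea: cluster the vectors with K-means, pin each IVFFlat list (centroid) to a shard, and route a query only to the shards owning its nearest centroids. A NumPy sketch of the routing step, independent of pgdog/pgvector's actual implementation:

```python
# Centroid-based query routing for a sharded IVFFlat index.
import numpy as np

rng = np.random.default_rng(0)
centroids = rng.normal(size=(16, 128))        # 16 IVF lists, 128-dim embeddings
shard_of_centroid = np.arange(16) % 4         # centroids assigned round-robin to 4 shards

def route_query(q, n_probes=3):
    """Return the shards that must be searched for query vector q."""
    d = np.linalg.norm(centroids - q, axis=1)
    nearest = np.argsort(d)[:n_probes]        # the n_probes closest lists
    return sorted(set(shard_of_centroid[nearest]))

print(route_query(rng.normal(size=128)))
```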

Notes on implementing Attention (eli.thegreenplace.net, 2025-03-26). Implement attention blocks in Python using Numpy, focusing on scaled self-attention, batched self-attention, and multi-head attention with detailed shape explanations and necessary functions like Softmax
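
For flavor, a minimal scaled self-attention along the lines the article works through, with shapes noted in comments:

```python
# Scaled dot-product self-attention in NumPy. x has shape (seq_len, d_model).
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)    # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    q, k, v = x @ Wq, x @ Wk, x @ Wv           # (seq_len, d_k) each
    scores = q @ k.T / np.sqrt(k.shape[-1])    # (seq_len, seq_len)
    return softmax(scores) @ v                 # (seq_len, d_k)

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 16, 8
x = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)     # (5, 8)
```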

TransformerLens Quick Reference (boristhebrave.com, 2025-03-29). TransformerLens is a Python library for mechanistic interpretability, featuring installation instructions and a cheat sheet on model creation, running examples, and working with activation caches and attention patterns

📖 Technical Guides & Evaluations

How to evaluate an LLM system (thoughtworks.com, 2025-03-26). Evaluating LLM systems involves probabilistic models requiring unique techniques, with tools like DeepEval and Giskard used for performance assessment, ensuring reliable outputs and guiding improvements through metrics and robust evaluation processes

On Programming with AI Assistants (crocidb.com, 2025-03-26). Bruno Croci discusses 'Vibe Coding' and the role of LLMs in programming, advocating treating them as junior devs for research and repetitive tasks, while emphasizing the importance of thoughtful integration

Show, Don’t Tell: A Llama PM’s guide to writing GenAI evals (ddmckinnon.com, 2025-03-31). A guide to writing GenAI evaluations, covering problem definition, eval creation, operationalization, and the importance of interannotator agreement, drawing on benchmarks like MMLU, metrics like Fleiss’s kappa, and human evaluators
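
Fleiss's kappa itself is a one-liner once ratings are tabulated. A toy example with statsmodels (the ratings matrix is placeholder data: five eval items, three annotators, 0 = fail / 1 = pass):

```python
# Interannotator agreement via Fleiss's kappa.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

ratings = np.array([   # rows: items, columns: annotators (placeholder data)
    [1, 1, 1],
    [1, 1, 0],
    [0, 0, 0],
    [1, 0, 1],
    [0, 0, 1],
])
table, _ = aggregate_raters(ratings)   # per-item counts for each category
print(f"Fleiss's kappa: {fleiss_kappa(table):.2f}")
```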

Understanding the Tech Stack Behind Generative AI (towardsdatascience.com, 2025-04-01). Explore the tech stack behind generative AI, including foundation models like GPT-4, tools like PyTorch, layers such as Explainable AI, multimodal capabilities, and the infrastructure required for scaling, compute power, and deployment

GenAI: Things You Need to Know (That Your Parents Didn’t Teach You) (quansight.com, 2025-03-28). An exploration of generative AI models covering LLMs' limitations in comprehension, tokenization, quantization formats, and agentic workflows, emphasizing their utility while clarifying misconceptions about their capabilities

📚 Academic Research I

Parameter-free KV cache compression for memory-efficient long-context LLMs (arxiv.org, 2025-03-27). ZeroMerge introduces a parameter-free approach to KV cache compression, enhancing memory efficiency for long-context LLMs while maintaining performance across various computational language tasks
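
ZeroMerge's specifics are in the paper; as a generic illustration of what a budgeted KV cache does, here is the simplest possible eviction policy (StreamingLLM-style sink tokens plus a sliding window), which keeps memory at O(budget) instead of O(sequence length). This is not ZeroMerge's algorithm:

```python
# Budgeted KV cache: keep the first n_sink tokens plus a sliding window
# of recent tokens; everything in between is evicted.
from collections import deque

class BudgetedKVCache:
    def __init__(self, n_sink=4, window=512):
        self.sink, self.recent = [], deque(maxlen=window)
        self.n_sink = n_sink

    def append(self, kv):                 # kv: the (key, value) pair for one token
        if len(self.sink) < self.n_sink:
            self.sink.append(kv)
        else:
            self.recent.append(kv)        # deque drops the oldest entry itself

    def contents(self):
        return self.sink + list(self.recent)

cache = BudgetedKVCache(n_sink=2, window=3)
for t in range(10):
    cache.append((f"k{t}", f"v{t}"))
print([k for k, _ in cache.contents()])   # ['k0', 'k1', 'k7', 'k8', 'k9']
```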

First Look at Reasoning From Scratch: Chapter 1 (sebastianraschka.com, 2025-03-29). Chapter 1 introduces reasoning methodologies in large language models (LLMs), focusing on chain-of-thought reasoning, inference-time scaling, and reinforcement learning, laying the groundwork for practical coding in upcoming chapters

Why Stop at One Error? Benchmarking LLMs as Data Science Code Debuggers for Multi-Hop and Multi-Bug Errors (arxiv:cs, 2025-03-28). DSDBench introduces a benchmark for evaluating LLMs in multi-hop error tracing and multi-bug detection in data science code, featuring 1,117 samples with 741 error pairs, enhancing AI-assisted debugging capabilities

L4: Diagnosing Large-scale LLM Training Failures via Automated Log Analysis (arxiv:cs, 2025-03-26). L4 is a log-based framework for diagnosing large-scale LLM training failures, focusing on automated extraction of log events and identifying patterns from hardware and user faults, improving diagnosis efficiency in Platform-X

Harnessing the Reasoning Economy: A Survey of Efficient Reasoning for Large Language Models (arxiv:cs, 2025-03-31). This survey analyzes reasoning economy in Large Language Models, focusing on causes of inefficiency, reasoning behavior patterns, and strategies for balancing performance with computational costs during post-training and inference stages

How Generative IR Retrieves Documents Mechanistically (arxiv:cs, 2025-03-25). Generative Information Retrieval (GenIR) uses transformer models for document ranking; the paper reveals a three-stage process of priming, bridging, and interaction, probed with mechanistic interpretability tools like patching and vocabulary projections

Bridging the Dimensional Chasm: Uncover Layer-wise Dimensional Reduction in Transformers through Token Correlation (arxiv:cs, 2025-03-28). This work develops a geometric framework for analyzing token dynamics in Transformers, revealing a pattern of expansion and contraction, and suggesting effective models compress tokens into low-dimensional submanifolds resembling human semantic spaces

📚 Academic Research II

Exploring the Roles of Large Language Models in Reshaping Transportation Systems: A Survey, Framework, and Roadmap (arxiv:cs, 2025-03-27). The LLM4TR framework categorizes Large Language Models in transportation into information processors, knowledge encoders, component generators, and decision facilitators, enhancing traffic prediction, autonomous driving, and urban mobility optimization

LLM-enabled Instance Model Generation (arxiv:cs, 2025-03-28). This work presents a two-step method for generating XMI-based instance models from Ecore metamodels and natural language, using LLMs like GPT-4o and Llama 3.1, improving usability in model generation tasks

Molecular Quantum Transformer (arxiv:cs, 2025-03-27). Molecular Quantum Transformer (MQT) utilizes quantum circuits for attention mechanisms, efficiently calculating ground-state energies in molecular systems like H₂ and LiH, outperforming classical methods and extending capabilities in quantum chemistry

A Comparative Analysis of Word Segmentation, Part-of-Speech Tagging, and Named Entity Recognition for Historical Chinese Sources, 1900-1950 (arxiv:cs, 2025-03-25). A comparison of LLMs like GPT-4o and Claude 3.5 against traditional NLP tools for word segmentation, POS tagging, and NER on historical Chinese texts reveals LLMs excel despite higher computational costs


About Generative AI

Our Generative AI newsletter covers the latest developments, trends, tools, and insights in AI research, LLMs and agentic applications. Each week, we curate the most important content from over 50,000 blogs and news sites so you don't have to spend hours searching.

Whether you're a beginner or expert in generative AI, our newsletter provides valuable information to keep you informed and ahead of the curve in this rapidly evolving field.

Subscribe now to join thousands of professionals who receive our weekly updates!