Machine Learning Engineer: 14th October 2025

🔧 Company Engineering Blogs

About Palantir (blog​.palantir​.com). Palantir explains its stance on data ownership, privacy-by-design, governance, and ethics in AI, with ICE contract context and European data sovereignty

From Single-Node to Multi-GPU Clusters: How Discord Made Distributed Compute Easy for ML Engineers (discord​.com). Discord details building a Ray-based ML platform with CLI, Dagster + KubeRay orchestration, and X-Ray observability for multi-GPU training

Shipping Containers: How We Built an Easy to Use Jenkins Pipeline for ECR (eng​.wealthfront​.com). Centralized Jenkins-based container build pipeline with a builder pattern, ECR integration, and test coverage

Productivity Habits (engineering​.gusto​.com). Practical productivity habits for engineers: stay unblocked, breadth-first structuring, and strategic tooling including VSCode, Vim, and AI aids

Engineering Real-Time Multimodal AI Pipelines: Scaling File Processing to 50M Daily Uploads (engineering​.salesforce​.com). Real-time multimodal AI pipelines for 50M daily uploads: file processing, validation, base64 grounding, and cross-platform prompts

✍️ Careers & Perspectives

My Data Career Journey So Far (jordangoodman​.bearblog​.dev). From logistics to data analytics and backend development, leveraging Python, SQL, Pandas, FastAPI, Django, and cloud platforms

What Could Go Wrong? (matthiasott​.com). Explores backpropagation, AI hype, the Hinton–Stewart interview, and the societal implications of large language models and AI progress

The Cat Paper (mbi-deepdives​.com). Explains the Cat Paper's shift to unsupervised learning at scale, GPUs for ML, and its impact on recommendation engines and tech giants

Interesting Interview: Python, Go, Rust, TypeScript and AI (garajeando​.blogspot​.com). Interviews and experiments across Python, Go, Rust, TypeScript, and AI with readings, videos, and links

🔒 LLM Security & Poisoning

Hardware Vulnerability Allows Attackers to Hack AI Training Data (ece​.ncsu​.edu). NC State researchers reveal GATEBLEED, a timing-based hardware vulnerability in AMX accelerators that leaks AI training data and routing decisions

LLM Poisoning [1/3] - Reading the Transformer's Thoughts (synacktiv​.com). Explores Transformer internals, FFN key–value memory, trigger detection in pre-down MLP activations, and causal tracing for hidden knowledge in LLMs

Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples (blog​.quintarelli​.it). Poisoning attacks on LLMs show a near-constant number of poisoned documents needed across model and dataset sizes

A small number of samples can poison LLMs of any size (anthropic​.com). 250 malicious documents can backdoor LLMs from 600M to 13B parameters, challenging data-proportional poisoning assumptions

🛠️ ML Platforms & Ops

Process Guardianship: The Most Valuable Data Engineering Work You’re Probably Not Doing (datakitchen​.io). Consolidate scattered business logic into a centralized, tested production pipeline using dbt, version control, and data governance

GPUs, module upgrades and more site fixes (markjgsmith​.com). GPU acceleration for locally running containerized LLMs on macOS; refactor to unify plugins; updated site design and navigation

Building a 10-billion wallet crypto-intelligence platform: Elliptic’s journey with Amazon DynamoDB (aws​.amazon​.com). Elliptic builds a real-time, 10B-wallet crypto-intelligence graph on DynamoDB with GSIs, streams, and 0x-based terminal nodes

Use Amazon SageMaker HyperPod and Anyscale for next-generation distributed computing (aws​.amazon​.com). SageMaker HyperPod with Anyscale and Ray for scalable distributed AI training on EKS, with monitoring and cost optimization

🔧 Compute & Frameworks

All in on MatMul? Don’t Put All Your Tensors in One Basket! (sigarch​.org). Hardware lottery in AI; MatMul dominance, generality, and co-design for future hardware-software synergy

Deep Neural Networks and Julia (chasethedevil​.github​.io). Explores Julia neural networks for finance with Flux, comparing to SimpleChain, LUX, and PyTorch performance

Training Federated AI Models to Predict Protein Properties (developer​.nvidia​.com). Federated training with NVIDIA FLARE and BioNeMo to predict protein subcellular localization using ESM-2nv in a FAIR, privacy-preserving setup

🧪 Evaluation & Performance

Testing your models before you build them (hoyleanalytics​.org). Pre-training model-form tests: asymptotic behaviour, stress tests, known behaviours, coefficient ranges, and dimensional analysis for robust mathematical form

Importance of offline evaluation to guide model choice (tech​.olx​.com). OLX compares open embedding models with internal Item2Vec using MTEB benchmarks, fine-tuning, and offline evaluation for multilingual recall

Introducing Semi-automatic Performance Engineering (dsyme​.net). Semi-automatic Performance Engineering using GitHub Agentic Workflows, Daily Perf Improver, planning, Build Step Inference, and experimental AI agents (Claude, Codex, Copilot CLI) to improve performance across repos like FsMath and Z3

🔎 Data Projects & Analyses

The Art of Data Refinement: Severance Analyses (lucymcgowan​.com). Data extraction from Severance using elevator sounds, cepstral analysis, KNN, and text mining on episode scripts

New model reveals the intricate structure of everyday materials (news​.stanford​.edu). Stanford researchers map microstructure of concrete and sand using a Poisson model and multipoint correlations inspired by Battleship

Nested Forecasting with Spark: Blockchain ETF Trends (datageeek​.com). Predict blockchain ETF trends using nested forecasting with Spark; compares BCHN.L and IBLC via XGBoost and Prophet

Using disorder to reveal hidden objects (rootprivileges​.net). Fingerprint operator analyzes reflection matrices to locate objects in strongly scattering media using complex wave patterns

🧠 Transformers at Scale

Cross Talk (joecooper​.me). Markov text generation, DeBERTa-based reranking, and OCR-like text sorting for multi-turn chat on a 3090, with OpenSubtitles data and bespoke quality-control models
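For readers new to the Markov side of that post, a minimal word-level Markov text generator looks like this (a generic first-order sketch, not the author's code; the toy corpus is made up):

```python
import random

def build_chain(text):
    """Build a first-order word-level Markov chain: word -> observed successors."""
    words = text.split()
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, n, seed=0):
    """Generate up to n words by random walk over the chain."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        choices = chain.get(out[-1])
        if not choices:
            break  # dead end: the last word never had a successor
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
chain = build_chain(corpus)
print(generate(chain, "the", 5))
```

Sampling successors in proportion to their observed counts is what makes the output statistically resemble the training text.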

modded-nanogpt world record: Decoupling embedding size from model dimension (snimu​.github​.io). Modded-NanoGPT uses multiple input embeddings with learned layer-wise weights to decouple embedding size from model dimension

KV Cache Optimization via Multi-Head Latent Attention (pyimagesearch​.com). KV Cache optimization with Multi-Head Latent Attention (MLA) reduces KV cache memory in transformers for long-context inference
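To see why MLA matters, the arithmetic below compares a standard per-head KV cache against a compressed-latent cache; all configuration numbers are illustrative, not any specific model's, and the MLA formula is a rough sketch (real MLA also caches a small decoupled RoPE key):

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, batch, dtype_bytes=2):
    """Standard KV cache: two tensors (K and V) per layer, per head."""
    return 2 * layers * kv_heads * head_dim * seq_len * batch * dtype_bytes

def mla_cache_bytes(layers, latent_dim, seq_len, batch, dtype_bytes=2):
    """MLA-style cache: one compressed latent vector per token per layer."""
    return layers * latent_dim * seq_len * batch * dtype_bytes

# Illustrative numbers only (not any specific model's config)
std = kv_cache_bytes(layers=32, kv_heads=32, head_dim=128, seq_len=32768, batch=1)
mla = mla_cache_bytes(layers=32, latent_dim=512, seq_len=32768, batch=1)
print(std // 2**20, "MiB vs", mla // 2**20, "MiB")  # 16384 MiB vs 1024 MiB
```

At long context lengths the cache, not the weights, dominates memory, which is why compressing it enables longer-context inference.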

NVIDIA and SGLang Accelerating SemiAnalysis InferenceMAX and GB200 Together (lmsys​.org). NVIDIA Blackwell optimizations enable SGLang and GB200 deployment for Prefill-Decode disaggregation and MoE parallelism up to 26k/13k tokens per second per GPU

Recurrence and Attention for Long-Context Transformers with Jacob Buckman - #750 (twimlai​.com). Long-context transformers with Jacob Buckman; windowed attention, grouped query attention, latent space attention, Power Retention, and Vidrial/PowerCoder open-source projects

🧬 Alternative Generative Models

Revisiting Karpathy’s 'The Unreasonable Effectiveness of Recurrent Neural Networks' (gilesthomas​.com). Explores Karpathy's 2015 RNN post, contrasts vanilla RNNs with LLMs, discusses byte-level inputs, training via truncated BPTT, and PyTorch vs Lua Torch implementations

I invented a new generative model and got accepted to ICLR (discrete-distribution-networks​.github​.io). Discrete Distribution Networks (DDN): a hierarchical discrete generative model with multi-sample outputs, zero-shot conditional generation, and CLIP-guided conditioning

“Linear Regression with Two-Way Interactions Using JavaScript” in Visual Studio Magazine (jamesmccaffreyblog​.com). Discusses linear regression with two-way interactions in JavaScript, SGD training, model weights, and Visual Studio Magazine demo

📊 Statistical Modeling & Info

k-Nearest Neighbor density (logarithmic​.net). Explores k-Nearest Neighbor density estimation with adaptive Gaussian splats based on kth nearest neighbor distances
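The core estimator behind that post is simple: density at a point is roughly k divided by the mass of the ball that reaches the kth nearest neighbour. A 1-D sketch (illustrative only, not the article's code):

```python
def knn_density_1d(x, sample, k):
    """k-NN density estimate at x: density ~ k / (n * volume of the ball
    whose radius is the distance to the kth nearest neighbour)."""
    n = len(sample)
    r_k = sorted(abs(s - x) for s in sample)[k - 1]
    volume = 2.0 * r_k  # length of the interval [x - r_k, x + r_k]
    return k / (n * volume)

# Uniform-ish sample on [0, 1): the true density is 1 everywhere
sample = [i / 100.0 for i in range(100)]
print(round(knn_density_1d(0.5, sample, k=10), 2))  # 1.0
```

The adaptive bandwidth (r_k shrinks where points are dense, grows where they are sparse) is what the Gaussian-splat variant in the post builds on.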

How many valid UTF-8 (or UTF-16, or UTF-32) byte sequences are there? (qntm​.org). Counts the valid UTF-8, UTF-16, and UTF-32 byte sequences of a given length using linear recurrences and their eigenvalues
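The UTF-8 recurrence is easy to reproduce: a valid sequence of n bytes ends in a 1-, 2-, 3-, or 4-byte encoding, and the standard admits 128, 1920, 61440 (surrogates excluded), and 1048576 such encodings respectively. A sketch from that definition (my own code, not the article's):

```python
def count_utf8(n, _cache={0: 1}):
    """Number of valid UTF-8 byte sequences of length exactly n bytes."""
    if n < 0:
        return 0
    if n not in _cache:
        _cache[n] = (128 * count_utf8(n - 1)      # 1-byte encodings: U+0000..U+007F
                     + 1920 * count_utf8(n - 2)   # 2-byte: U+0080..U+07FF
                     + 61440 * count_utf8(n - 3)  # 3-byte: U+0800..U+FFFF minus surrogates
                     + 1048576 * count_utf8(n - 4))  # 4-byte: U+10000..U+10FFFF
    return _cache[n]

print([count_utf8(n) for n in range(4)])  # [1, 128, 18304, 2650112]
```

The growth rate of this sequence is the largest eigenvalue of the recurrence's companion matrix, which is where the eigenvalue discussion in the post comes in.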

Brains, minds and machines: A new algorithm for decoding intelligence (news​.engineering​.utoronto​.ca). Researchers develop a convex-optimization framework to minimize negative transfer in brain decoding using a generalized method of moments-based mixture model

Independent Component Analysis (hclimente​.github​.io). Independent Component Analysis explained via the cocktail party problem, non-Gaussianity, whitening, deflation, and FastICA notes

📈 Time Series & Risk

Book Review: Time Series Forecasting using Foundation Models (sujitpal​.blogspot​.com). Book review surveys seven Foundation Models for time series forecasting, with zero-shot, fine-tuning, probabilistic forecasts, anomaly detection, and a capstone project

Polars helps coping with black swan events at La Mobilière (pola​.rs). La Mobilière scales simulations with Polars DataFrames to model catastrophic years and compute TVaR across lines of business

Native uncertainty quantification for time series with NGBoost (thierrymoudiki​.github​.io). Native uncertainty quantification for time series using NGBoost with nnetsauce and cybooster for probabilistic forecasting

🧮 Linear Algebra Core

Explicit Lossless Vertex Expanders! (gilkalai​.wordpress​.com). Explicit, constant-degree lossless vertex expanders via Ramanujan cubical complexes and base graphs, with group actions and LDPC-code implications

Notes - NLA MT25, Singular value decomposition (ollybritton​.com). Overview of SVD, eigenvalue connections, and properties of A = UΣVᵀ across symmetric, unitary, skew-symmetric, normal, and triangular matrices, with low-rank approximations

Notes - NLA MT25, Courant-Fischer minmax theorem (ollybritton​.com). Courant-Fischer minmax theorem for symmetric matrices and its relation to singular values, subspace optimization, and Weyl’s inequality

📚 Academic Research

pyGinkgo: A Sparse Linear Algebra Operator Framework for Python (arxiv:cs). Pythonic interface to Ginkgo delivering portable, high-performance sparse linear algebra across CUDA/HIP/OpenMP. Benchmarks beat SciPy/CuPy/Torch; parity with C++. Useful backend for SpMV and iterative solvers

A Model-Driven Engineering Approach to AI-Powered Healthcare Platforms (arxiv:cs). Hardware–software co-design accelerates differentially private training with correlated noise. Characterization reveals overheads; Cocoon precomputes and coalesces noise and adds NMP, achieving 1.55–10.82× speedups on models with embeddings

Gradient-Guided Furthest Point Sampling for Robust Training Set Selection (arxiv:stat). Introduces gradient-guided furthest point sampling using force norms to select configurations. Outperforms FPS and uniform sampling on MD17, improving accuracy and efficiency for molecular ML

TabPFN-Wide: Continued Pre-Training for Extreme Feature Counts (arxiv:cs). Continues pretraining prior-data fitted networks on synthetic priors to handle 50k+ features. Matches or exceeds performance, robust to noise, maintaining interpretability for high-dimensional tabular data

Learning Mixtures of Linear Dynamical Systems (MoLDS) via Hybrid Tensor-EM Method (arxiv:cs). Tensor–EM framework learns mixtures of linear dynamical systems with identifiability and robust estimation, combining tensor moments with Kalman EM. Demonstrated on synthetic and neural datasets

👋 Before you go

Blaze newsletters will soon be moving to Substack as the main email delivery service. This is primarily to streamline managing subscriptions, sending email, and archiving newsletters. There will be no change to your newsletters: they will remain completely free, and you will be able to subscribe and unsubscribe just as easily as before.

Blaze's sister site https://blognerd.app, a search engine for blogs and posts, has had a major makeover, and is a good place to search for smart, independent writing.

Finally, if you get value from this newsletter, please consider supporting me by joining the Patreon page at patreon.com/blazeemail. Becoming a patron helps me cover my costs and keep Blaze going so everyone can enjoy the newsletters for free.

About Machine Learning Engineer

Our Machine Learning Engineer newsletter covers the latest developments, research papers, tools, and techniques in ML engineering and deployment. Each week, we curate the most important content so you don't have to spend hours searching.

Whether you're a beginner or expert in machine learning engineering, our newsletter provides valuable information to keep you informed and ahead of the curve in this technically challenging field.

Subscribe now to join thousands of professionals who receive our weekly updates!