Tagged: Algorithm

Sep 22, 2025 Standalone 10 min read

Low-Rank Matrix Approximation and the Pseudoinverse: From SVD to Regularization

From the least-squares view to the Moore-Penrose pseudoinverse, the four Penrose conditions, computation via SVD, truncated SVD, Tikhonov regularization, and modern applications from PCA to LoRA.

Jul 24, 2025 Standalone 14 min read

Reparameterization Trick & Gumbel-Softmax: A Deep Dive

Make sense of the reparameterization trick and Gumbel-Softmax: why gradients can flow through sampling, how temperature trades bias for variance, and the practical pitfalls of training discrete latent variables …

Dec 15, 2024 Time Series Forecasting 17 min read

Time Series Forecasting (8): Informer -- Efficient Long-Sequence Forecasting

Informer reduces Transformer complexity from O(L^2) to O(L log L) via ProbSparse attention, distilling, and a one-shot generative decoder. Full math, PyTorch code, and ETT/weather benchmarks.

Nov 30, 2024 Time Series Forecasting 18 min read

Time Series Forecasting (7): N-BEATS -- Interpretable Deep Architecture

N-BEATS combines deep learning expressiveness with classical decomposition interpretability. Basis function expansion, double residual stacking, and M4 competition analysis with full PyTorch code.

Nov 15, 2024 Time Series Forecasting 18 min read

Time Series Forecasting (6): Temporal Convolutional Networks (TCN)

TCNs use causal dilated convolutions for parallel training and exponential receptive fields. Complete PyTorch implementation with traffic flow and sensor data case studies.

Oct 31, 2024 Time Series Forecasting 13 min read

Time Series Forecasting (5): Transformer Architecture for Time Series

Transformers for time series, end to end: encoder-decoder anatomy, temporal positional encoding, the O(n^2) attention bottleneck, decoder-only forecasting, and patching. With variants (Autoformer, FEDformer, Informer, …

Oct 16, 2024 Time Series Forecasting 12 min read

Time Series Forecasting (4): Attention Mechanisms -- Direct Long-Range Dependencies

Self-attention, multi-head attention, and positional encoding for time series. Step-by-step math, PyTorch implementations, and visualization techniques for interpretable forecasting.

Oct 1, 2024 Time Series Forecasting 11 min read

Time Series Forecasting (3): GRU -- Lightweight Gates and Efficiency Trade-offs

GRU distills LSTM into two gates for faster training and 25% fewer parameters. Learn when GRU beats LSTM, with formulas, benchmarks, PyTorch code, and a decision matrix.

Sep 16, 2024 Time Series Forecasting 9 min read

Time Series Forecasting (2): LSTM -- Gate Mechanisms and Long-Term Dependencies

How LSTM's forget, input, and output gates solve the vanishing gradient problem. Complete PyTorch code for time series forecasting with practical tuning tips.

Sep 1, 2024 Time Series Forecasting 13 min read

Time Series Forecasting (1): Traditional Statistical Models

ARIMA, SARIMA, VAR, GARCH, Prophet, exponential smoothing, and the Kalman filter, all derived from a single state-space view. With the Box-Jenkins workflow, ACF/PACF identification, and runnable Python.

Oct 15, 2023 Standalone 14 min read

Kernel Methods: From Theory to Practice (RKHS, Common Kernels, and Hyperparameter Tuning)

Understand the kernel trick, RKHS theory, and practical kernel selection. Covers RBF, polynomial, Matérn, and periodic kernels with sklearn code and a tuning flowchart.

Sep 20, 2023 Standalone 6 min read

Position Encoding Brief: From Sinusoidal to RoPE and ALiBi

A practitioner's tour of Transformer position encoding: why attention needs it at all, how sinusoidal/learned/relative/RoPE/ALiBi schemes differ, and which one to pick when long-context extrapolation matters.

Aug 26, 2023 Standalone 13 min read

Variational Autoencoder (VAE): From Intuition to Implementation and Troubleshooting

Build a VAE from scratch in PyTorch. Covers the ELBO objective, reparameterization trick, posterior collapse fixes, beta-VAE, and a complete training pipeline.

Mar 13, 2023 Standalone 19 min read

Learning Rate: From Basics to Large-Scale Training

A practitioner's guide to the single most important hyperparameter: why too-large LR explodes, how warmup and schedules really work, the LR range test, the LR-batch-size-weight-decay coupling, and recent ideas like WSD, …

Dec 27, 2022 Standalone 13 min read

Lipschitz Continuity, Strong Convexity & Nesterov Acceleration

Three concepts that demystify most of optimization: Lipschitz smoothness fixes the maximum step size, strong convexity sets the convergence rate and uniqueness of the minimizer, and Nesterov acceleration replaces kappa …

Dec 9, 2022 Standalone 10 min read

Optimizer Evolution: From Gradient Descent to Adam (and Beyond, 2025)

One article that traces the full lineage: GD -> SGD -> Momentum -> NAG -> AdaGrad -> RMSProp -> Adam -> AdamW, then onwards to Lion / Sophia / Schedule-Free. Each step is framed by the specific failure of the previous …

Jul 25, 2022 Standalone 17 min read

Proximal Operator: From Moreau Envelope to ISTA/FISTA and ADMM

A systematic walk through the proximal operator: convex-analysis basics, the Moreau envelope, closed-form proxes, and how they power ISTA, FISTA, ADMM, LASSO, and SVM in practice.