The Math Behind Neural Networks

Dive into Neural Networks, the backbone of modern AI. Understand their mathematics, implement one from scratch, and explore their applications.
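A from-scratch implementation in this spirit can be sketched in a few lines of NumPy. This is an illustrative toy, not the article's code: the task (XOR), layer sizes, and learning rate are my own choices.

```python
import numpy as np

# Minimal two-layer network trained on XOR with hand-written backprop.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10_000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of mean binary cross-entropy w.r.t. logits is (p - y) / n
    dlogits = (p - y) / len(X)
    dW2, db2 = h.T @ dlogits, dlogits.sum(axis=0)
    dh = (dlogits @ W2.T) * (1 - h**2)   # tanh'(z) = 1 - tanh(z)^2
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    # Plain gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

Every line of the backward pass is just the chain rule applied to the forward pass above it, which is the core of the math the article walks through.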

Track Your ML Experiments

You know the drill. You get a dataset, load it into a Jupyter notebook, explore it, preprocess the data, fit a baseline model or two, and then train a first candidate for the final model, such as XGBoost. The first…
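That loop of repeated runs is exactly where a lightweight tracker helps. As a minimal sketch (the file name and record fields are my own, not a tool from the article), each run's parameters and metrics can be appended to a JSON-lines file so results survive notebook restarts:

```python
import json
import os
import tempfile
import time

# Append one record per run: timestamp, hyperparameters, and metrics.
def log_run(params, metrics, path):
    record = {"time": time.time(), "params": params, "metrics": metrics}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

log_path = os.path.join(tempfile.gettempdir(), "runs.jsonl")
rec = log_run({"model": "xgboost", "max_depth": 6}, {"auc": 0.91}, log_path)
```

One line per run means the log can be loaded back into a dataframe later to compare experiments side by side.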

A Benchmark and Taxonomy of Categorical Encoders

A large share of datasets contain categorical features. For example, out of 665 datasets on the UC Irvine Machine Learning Repository [1], 42 are fully categorical and 366 are reported as mixed…
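Two of the encoder families such a taxonomy typically distinguishes can be sketched by hand; the colors/labels data below is invented for illustration, not from the benchmark:

```python
# One-hot encoding (one indicator column per category) versus target encoding
# (replace each category with the mean label observed for it).
colors = ["red", "green", "red", "blue", "green", "red"]
labels = [1, 0, 1, 0, 1, 1]

# One-hot: each category becomes an indicator vector.
categories = sorted(set(colors))
one_hot = [[int(c == cat) for cat in categories] for c in colors]

# Target encoding: per-category mean of the label.
sums, counts = {}, {}
for c, y in zip(colors, labels):
    sums[c] = sums.get(c, 0) + y
    counts[c] = counts.get(c, 0) + 1
target_enc = [sums[c] / counts[c] for c in colors]
```

One-hot grows the feature count with the number of categories, while target encoding keeps a single column at the cost of possible label leakage — the kind of trade-off a benchmark of encoders measures.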

Top Important LLM Papers for the Week from 18/03 to 24/03

Large language models (LLMs) have advanced rapidly in recent years. As new generations of models are developed, researchers and engineers need to stay informed on the latest progress. This article…

medium.com/towards-ai 12 hours ago

Where Do EU Horizon H2020 Fundings Go?

Combining exploratory data analytics, geospatial data, and network science in Python to overview 35k+ EU-funded projects.

towardsdatascience.com 13 hours ago

Amping Up Time Series Forecasts: The Signature Transformation Method in Python, Part 1

Friends, I am so excited to share with you a truly amazing invention of Stochastic Process Math: the Signature of a Time Series! This is a complex topic with a lot of detail to cover, so this…
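For a taste of what the signature is, here is a minimal depth-2 signature of a piecewise-linear path — my own sketch, not the article's method. Level 1 collects the total increments; level 2 accumulates the iterated integrals.

```python
import numpy as np

# Depth-2 signature of a piecewise-linear path sampled as an array of shape (T, d).
# Level 2 adds, for each step, (signature accumulated so far) x (new increment),
# plus a half-square term from integrating within the linear segment itself.
def signature_level2(path):
    dX = np.diff(path, axis=0)        # per-step increments
    S1 = dX.sum(axis=0)               # level-1 signature: total increment
    d = path.shape[1]
    S2 = np.zeros((d, d))
    running = np.zeros(d)             # level-1 signature accumulated so far
    for inc in dX:
        S2 += np.outer(running, inc) + 0.5 * np.outer(inc, inc)
        running += inc
    return S1, S2

rng = np.random.default_rng(0)
path = rng.normal(size=(10, 3)).cumsum(axis=0)  # a random 3-dimensional path
S1, S2 = signature_level2(path)
```

A quick sanity check is the shuffle identity S2 + S2ᵀ = S1 S1ᵀ, which this piecewise-linear discretization satisfies exactly.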

medium.com/towards-ai 14 hours ago

Learn AI Together — Towards AI Community Newsletter #18

Good morning, AI enthusiasts! This week, I’m super excited to announce that we are finally releasing our book, ‘Building AI for Production: Enhancing LLM Abilities and Reliability with Fine-Tuning…

medium.com/towards-ai 16 hours ago

Building Blocks of Transformers: Attention

Explaining attention using a simple money borrowing scenario.
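To make the metaphor concrete, here is scaled dot-product attention in a few lines of NumPy — a generic sketch, not the article's example. Each query "borrows" from the values in proportion to how well it matches the keys.

```python
import numpy as np

# Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)     # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # each row sums to 1
    return weights @ V, weights                      # weighted "borrowing" of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # 3 queries of dimension 4
K = rng.normal(size=(5, 4))   # 5 keys
V = rng.normal(size=(5, 4))   # 5 values, one per key
out, w = attention(Q, K, V)
```

The softmax row for a query is its "lending agreement": nonnegative shares over the keys that decide how much of each value it takes.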

medium.com/towards-ai 16 hours ago

Unlocking the Secrets of AI Mind Reading

Recently, I have been researching large language models with the goal of using them to score samples. Initially, I attempted to have the model rate on a scale from 1 to 5, but it consistently gave…

medium.com/towards-ai 17 hours ago

Retrieval-Augmented Generation, aka RAG — How does it work?

In the context of Large Language Models (LLMs), RAG stands for Retrieval-Augmented Generation. RAG combines the power of retrieval systems with the generative capabilities of neural networks to…
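The retrieve-then-generate loop can be sketched with toy bag-of-words embeddings. Everything here — the documents, query, and prompt template — is invented for illustration, and the generation step, where an LLM would complete the prompt, is stubbed out:

```python
import numpy as np

# Toy RAG: embed documents, retrieve the one most similar to the query,
# and prepend it as context to the prompt an LLM would then complete.
docs = [
    "RAG augments generation with retrieved context",
    "Transformers apply attention over token embeddings",
    "Cats sleep most of the day",
]

def embed(text, vocab):
    words = [w.strip(".,?!").lower() for w in text.split()]
    return np.array([words.count(v) for v in vocab], dtype=float)

vocab = sorted({w.strip(".,?!").lower() for d in docs for w in d.split()})
doc_vecs = np.stack([embed(d, vocab) for d in docs])

def retrieve(query, k=1):
    q = embed(query, vocab)
    # Cosine similarity between the query and every document.
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

query = "How does retrieved context help generation?"
context = retrieve(query, k=1)
prompt = f"Context: {context[0]}\n\nQuestion: {query}\nAnswer:"
```

In a real system the word-count vectors would be replaced by learned dense embeddings and the documents would live in a vector index, but the shape of the pipeline — embed, retrieve, augment, generate — is the same.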

medium.com/towards-ai 17 hours ago