A Recipe For a Robust Model Development Process

A model might fail but still produce some output, often without any signal of why or where it went wrong. And there are many reasons for a model to fail. It can fail due to bad data…

Graphs in Motion: Spatio-Temporal Dynamics with Graph Neural Networks

Interconnected graph data is all around us, ranging from molecular structures to social networks and the layouts of cities. Graph Neural Networks (GNNs) are emerging as a powerful method of…

Tuning Word2Vec with Bayesian Optimization: Applied to Music Recommendations

Discover how we applied Bayesian optimization to tune Word2Vec hyperparameters for a large-scale music recommendation system, boosting hit rate and NDCG metrics
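To make the idea concrete, here is a minimal sketch of Bayesian hyperparameter tuning for Word2Vec, using gensim and Optuna's default TPE sampler as stand-ins for whatever tooling the article actually used; the toy `playlists` data and the `evaluate_hit_rate` function are hypothetical placeholders for real listening sessions and a real held-out hit rate / NDCG evaluation.

```python
import optuna
from gensim.models import Word2Vec

# Toy stand-in data: each "sentence" is a listening session of track IDs (hypothetical).
playlists = [["t1", "t2", "t3"], ["t2", "t3", "t4"], ["t1", "t4", "t5"]] * 100

def evaluate_hit_rate(model):
    # Hypothetical placeholder metric: in a real system this would be hit rate or NDCG
    # on held-out sessions; here we just score similarity of two co-occurring tracks.
    return float(model.wv.similarity("t2", "t3"))

def objective(trial):
    # Sample Word2Vec hyperparameters; Optuna's default TPE sampler is a Bayesian-style optimizer.
    params = {
        "vector_size": trial.suggest_int("vector_size", 16, 128),
        "window": trial.suggest_int("window", 2, 10),
        "negative": trial.suggest_int("negative", 2, 20),
        "sample": trial.suggest_float("sample", 1e-5, 1e-2, log=True),
        "epochs": trial.suggest_int("epochs", 3, 15),
    }
    model = Word2Vec(sentences=playlists, sg=1, workers=4, min_count=1, **params)
    return evaluate_hit_rate(model)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print(study.best_params)
```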

Inside Jamba: Mamba, Transformers, and MoEs Together to Power a New Form of LLMs

Transformer architectures have been the dominant paradigm in LLMs, leading to exceptional advances in research and development. The question of whether transformers will be the final architecture…

How to Make Cyberpunk “Dark Mode” Data Visualizations in Python

I’ve always loved dark chart backgrounds with neon lines, both for their aesthetic and for their improved accessibility for certain types of vision impairment. In this article we’ll be discussing…
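As a quick illustration of the look (not the article's exact code), here is a minimal matplotlib sketch that combines the built-in dark_background style with neon colors and a simple re-plotting "glow" trick; all colors and styling values are illustrative choices.

```python
import numpy as np
import matplotlib.pyplot as plt

plt.style.use("dark_background")  # built-in matplotlib style for dark-mode charts

x = np.linspace(0, 10, 200)
series = {"#08F7FE": np.sin(x), "#FE53BB": np.cos(x)}  # neon cyan and pink (illustrative hex codes)

fig, ax = plt.subplots(figsize=(8, 4))
for color, y in series.items():
    ax.plot(x, y, color=color, linewidth=2)
    # Cheap "glow" effect: redraw the same line a few times, wider and more transparent each pass
    for width in range(4, 12, 2):
        ax.plot(x, y, color=color, linewidth=width, alpha=0.05)

ax.grid(color="#2A3459", alpha=0.5)
ax.set_title("Cyberpunk-style dark mode chart")
plt.tight_layout()
plt.show()
```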

Building Local RAG Chatbots Without Coding Using LangFlow and Ollama

Frameworks like LangChain have definitely streamlined development, but hundreds of lines of code can still be a hurdle for those who aren’t programmers. That’s when I discovered “LangFlow,” an…

Python List Comprehension Is Not Just Syntactic Sugar

Python list comprehension is not just syntactic sugar. Its performance is better than that of a for loop because of how Python handles the global namespace and local variables.
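A quick way to check the performance claim yourself is to time both forms with the standard library's timeit; this is a minimal sketch, and the absolute numbers will vary by machine and Python version.

```python
from timeit import timeit

N = 1_000_000

def with_loop():
    result = []
    for i in range(N):
        result.append(i * 2)  # a method lookup and call on every iteration
    return result

def with_comprehension():
    return [i * 2 for i in range(N)]  # appending happens in specialized bytecode, in its own scope

print("for loop:      ", timeit(with_loop, number=10))
print("comprehension: ", timeit(with_comprehension, number=10))
```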

Understanding LangChain Chains for Large Language Model Application Development

One of the fundamental pillars of LangChain, as implied by its name, is the concept of “chains.” These chains typically integrate a large language model (LLM) with a prompt. Through these chain…
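In its simplest form, such a chain just binds a prompt template to a model. Below is a minimal sketch assuming the classic LLMChain interface from earlier LangChain releases; the model backend, template, and temperature are illustrative choices, and newer releases favor the prompt | llm composition syntax instead.

```python
from langchain.llms import OpenAI            # illustrative model backend; requires an OpenAI API key
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# A chain = a prompt template + an LLM that fills in the template and answers it
prompt = PromptTemplate(
    input_variables=["product"],
    template="Suggest a catchy name for a company that makes {product}.",
)
llm = OpenAI(temperature=0.7)

chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("solar-powered backpacks"))
```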

JAMBA, the First Powerful Hybrid Model is Here

JAMBA is the first-ever production-grade hybrid LLM. By combining several breakthroughs, it delivers a great model that is extremely efficient. But how?

Supervised and Unsupervised: What’s the Difference?

With machine learning’s surge in popularity over the past few years, more and more people are spending hours each day trying to learn as much as they can. The field attracts avid learners, with companies…