Writing good Jupyter notebooks
Write Jupyter notebooks that are easy to follow, easy to understand, flexible, and resilient.
Text summarization with large language models (LLMs)
Using LLMs to summarize GitHub issues as a learning exercise: the importance of a good prompt, what can go wrong, and how to fix it.
Don't use (only) accuracy to evaluate your model
Why accuracy is not a good metric to evaluate your model, and what to use instead.
Improve writing by learning how to read
Turning the advice in the "How to Read a Paper" article around to improve writing.
Machine learning interpretability with feature attribution
An overview of feature attribution, a technique for interpreting model predictions: a review of commonly used feature attribution methods, followed by a hands-on demonstration with SHAP, one of those methods.