Hi, I’m Tilman

Ph.D. student in deep learning at TU Berlin. On this personal blog I write about the deep learning and coding topics that keep me up at night. Naturally, there will be updates on my ongoing research, too.

Compose Datasets, Don't Inherit Them

In relatively young disciplines like deep learning, people tend to leave old principles behind. Sometimes this is a good thing, because times have changed and old truths, e.g. that over-completeness is a bad thing, have to go. Other times, such principles stick around for a reason, and yet people over-eagerly try to throw them out of the window. I am no exception in this regard, so let me tell you how I “re-learned” the tried-and-true design pattern of “Composition over Inheritance”....

30 January 2022 · 13 min

About Copying Blindly - Insights of the One-Eyed Person

Today I have no fancy project and no shiny GitHub repository to show, because I want to talk about my research. As you may have read in my bio, I am doing my PhD in the field of predictive maintenance (PDM). This field is directly adjacent to machine learning and fell, like many others, into the grasp of the deep learning hype. Unfortunately, long series of sensor readings are neither as interesting to look at as images nor as intuitively understood as natural language, so PDM is not as present in the minds of the general ML crowd....

7 April 2021 · 9 min

The Great Autoencoder Bake Off

“Another article comparing types of autoencoders?”, you may think. “There are already so many of them!”, you may think. “How does he know what I am thinking?!”, you may think. While the first two statements are certainly appropriate reactions - and the third a bit paranoid - let me explain my reasons for this article. There are indeed articles comparing some autoencoders to each other (e.g. [1], [2], [3]), but I found them lacking something....

24 January 2021 · 23 min

Make DL4J Readable Again

A while ago, I stumbled upon an article about the language Kotlin and how to use it for data science. I found it interesting, as some of Python’s quirks were starting to bother me and I wanted to try something new. A day later, I had completed the Kotlin Koans tutorials in IntelliJ IDEA (which is an excellent way to get started with Kotlin). Hungry to test out my new language skills, I looked around for a project idea....

20 September 2020 · 13 min

How to Trust Your Deep Learning Code

Deep learning is a discipline where the correctness of code is hard to assess. Random initialization, huge datasets, and the limited interpretability of weights mean that finding out exactly why your model is not training is, most of the time, trial and error. In classical software development, automated unit tests are the bread and butter for determining whether your code does what it is supposed to do. They help the developer trust their code and stay confident when introducing changes....

1 August 2020 · 27 min