Recap: ML Conference 2019 in Munich
On June 17, another round of the semi-annual ML Conference kicked off in Munich. As usual, the first day was dedicated to day-long workshops with joint live coding, giving participants an approachable introduction to Machine Learning and Deep Learning.
The second and third conference days were packed with talks. As always, the main problem was the agony of choice, with three to four parallel sessions to pick from at any time. Despite several hundred participants, it never felt too crowded, thanks to the well-chosen venue and the number of parallel tracks.
This time we were represented with two sessions: a short talk on TensorFlow training on the Java Virtual Machine and an introduction to autoencoders. On the last day, our CTO Christoph Henkelmann was also once again on stage for the expert panel.
Day 1: Hands-on Workshops & Speaker’s Dinner
True to its practitioner focus, the conference program started off with intensive workshops in which everything learned could be tried out immediately.
For beginners there was a basic course in Machine Learning with Pieter Buteneers, which enabled anyone with basic programming knowledge to learn the most important principles of ML in one day. Thanks to Pieter’s extensive practical experience, it was an ideal kickoff for getting into the subject.
As an alternative for those with more prior knowledge, Xander Steenbrugge and Frederic Stallaert offered a practical workshop on deep learning. Participants were able to train their own image classifiers with Keras, one of the most important deep learning frameworks today.
The most exciting part of the first evening for us was of course the Speaker’s Dinner, where we could network with the other speakers in a cosy Bavarian beer garden and chat about our sessions and current AI trends.
Day 2: Opening, Talks and Casino Flair
The opening session of the first lecture day contrasted with the rest of the program, which focused more on technology and implementation: the topic “The Ethics of AI” set a useful counterpoint right at the start of the conference.
From then on, it was time to get down to business in five parallel slots. Right at the start, one could learn about the state of the art in one of the classic AI applications: chess computers. Speaker Oliver Zeigermann began in the classical way with the inner workings of a symbolic chess AI and then showed how to let an AI learn a game by itself using Monte Carlo search. In parallel, Christian Hidber explained the basics of reinforcement learning; anyone who had always wanted to know how algorithms learn by trial and error was in the right place here.
The second slot was about using ML methods in production. Paul Dubs, for example, brought a whole hit list of the worst problems (and how to avoid them) when deploying and maintaining machine learning models. Thanks to his experience at Skymind, the company behind the leading deep learning framework for Java, Paul could offer valuable real-world advice.
Between talks there were always long enough breaks to catch some air and discuss what we had heard. And since nobody can learn on an empty stomach, an extensive selection of drinks and tasty catering was available at all times. Thanks to short waiting times at the buffet, there was plenty of time left for networking and discussion, which was at least as exciting as the sessions themselves.
Chatbots are an exciting topic at the moment, which is why this ML Conference featured two presentations about them. In the first, Christoph Windhäuser reported on several projects built with different chatbot frameworks and how he and his team used them in practice.
Christoph’s Talk #1 – Training Neural Networks on the JVM
In the last time slot of the day it was our turn: in a short talk, Christoph showed how Google’s deep learning framework TensorFlow can also be used on the Java Virtual Machine to train neural networks.
We already have a series of blog articles on running TensorFlow models on the JVM. In this talk we went one step further and showed how to also train a model under Java. For the very impatient, there is a super-short summary of the talk in the video here (German only).
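To give an idea of what this looks like, here is a minimal sketch (not the exact code from the talk, but the general pattern with the TensorFlow 1.x Java API): a graph containing a named training op is defined elsewhere (e.g. in Python), loaded on the JVM, and the training op is then run from Java. The file path and the op/tensor names ("x", "y", "train", "loss") are illustrative assumptions.

```java
import org.tensorflow.Graph;
import org.tensorflow.Session;
import org.tensorflow.Tensor;

import java.nio.file.Files;
import java.nio.file.Paths;

public class JvmTrainingSketch {
    public static void main(String[] args) throws Exception {
        // Load a GraphDef that already contains a training op
        // (exported beforehand, e.g. from a Python script).
        byte[] graphDef = Files.readAllBytes(Paths.get("model/graph.pb"));

        try (Graph graph = new Graph()) {
            graph.importGraphDef(graphDef);

            try (Session session = new Session(graph);
                 Tensor<Float> features = Tensor.create(new float[][]{{0.1f, 0.2f}}, Float.class);
                 Tensor<Float> labels = Tensor.create(new float[][]{{1.0f}}, Float.class)) {

                // One training step: feed data, run the train op, fetch the loss.
                try (Tensor<?> loss = session.runner()
                        .feed("x", features)
                        .feed("y", labels)
                        .addTarget("train")
                        .fetch("loss")
                        .run()
                        .get(0)) {
                    System.out.println("loss after step: " + loss.floatValue());
                }
            }
        }
    }
}
```

In a real training loop, the step above would of course be repeated over batches of data; the point is that the whole loop can live in Java.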
At the evening event, participants could relax with a cool beer, roulette and blackjack, and discuss the day’s talks.
You couldn’t attend Christoph Henkelmann’s (@divisio_ai ) #TensorFlow training on #JVM short talk at the #MLConference yesterday in person? Then here are the most important takeaways: #ai #java #bigdata #machineLearning #ml #datascience pic.twitter.com/mTlkKlcJko
— ML Conference (@mlconference) June 19, 2019
Day 3: Even More Talks, Even More Chatbots
Our second talk followed directly the next morning with one of our favourite topics: “Unsupervised Learning with Autoencoders”. In it we explained how neural networks can learn something even when only raw data is available, i.e. when the data has not been classified or annotated. If you want to learn more about the difference between supervised and unsupervised learning, you can do so in part 5 of our “Understanding AI” series.
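For readers who want to see the idea in code, here is a minimal sketch (not taken from the talk) of an autoencoder in Deeplearning4j, the Java framework mentioned above: the network is simply trained to reconstruct its own input, so no labels are needed. Layer sizes and hyperparameters are arbitrary illustrative values.

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class AutoencoderSketch {
    public static void main(String[] args) {
        int inputSize = 784;  // e.g. flattened 28x28 images
        int bottleneck = 32;  // compressed internal representation

        // The encoder squeezes the input into a small bottleneck;
        // the output layer tries to reconstruct the original input.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(42)
                .list()
                .layer(new DenseLayer.Builder()
                        .nIn(inputSize).nOut(bottleneck)
                        .activation(Activation.RELU).build())
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                        .nIn(bottleneck).nOut(inputSize)
                        .activation(Activation.SIGMOID).build())
                .build();

        MultiLayerNetwork autoencoder = new MultiLayerNetwork(conf);
        autoencoder.init();

        // Unsupervised: input and target are the same raw data,
        // no labels or annotations required.
        INDArray rawData = Nd4j.rand(100, inputSize);
        autoencoder.fit(rawData, rawData);
    }
}
```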
The rest of the morning was of course filled with a rich selection of talks on topics such as AutoML, Evolutionary Algorithms and tricking facial recognition.
The expert panel after the lunch break, in which Christoph also took part, focused on how companies with little or no machine learning experience can successfully bring ML projects into production. The audience was able to put questions to the panel, although unfortunately too few participants made use of this opportunity.
Among the many other exciting sessions, the talk by Vladimir Rybakov covered a topic close to our hearts: the pros and cons of preprocessing in deep learning projects. Alongside a very useful introduction to preprocessing methods, Vladimir showed which of them had the most, and which the least, influence on the quality of a real model.
One of the last presentations was about chatbots again. As CTO of Chatlayer, Pieter Buteneers gave brutally honest insights into the limitations of current chatbot technology and an overview of chatbot development in practice. The most important takeaway: chatbots still need a lot of manual work and predefined rules, and resemble the dialogue tree of a point-and-click game more than an autonomous conversation partner.
After that, unfortunately, it was time to say goodbye. But we will also be at the next ML Conference in Berlin this winter. You are welcome to follow us on Twitter, where we will keep you up to date on upcoming conference talks and the latest blog articles. We look forward to seeing you there or at another conference!