I attended the 2019 Toronto Machine Learning Summit (TMLS) last week, and it was a great experience. The community was welcoming, the content was relevant, and the event was well organized.
One thing I appreciated about the event is how down-to-earth it felt. Many conversations started with the person I sat beside during a session, which doesn't happen often at conferences. People had a range of experience with Machine Learning, but everyone came from a genuine place of wanting to learn more. It's quite different from other conferences I've attended where the marketing glamour is cranked up, leading to a mass-produced approach, hit-or-miss content, and participants feeling more distant.
I haven't personally met the founder of TMLS, David Scharbach, but we've exchanged a few messages online and he was always welcoming and open. I think the TMLS team has built this community one relationship at a time, and this conference's culture is a reflection of those values.
The conference speakers were broken out into three streams: Business; Applied Case Studies & ML In Production; and Advanced Technical. The sessions I attended were in the Applied Case Studies and Advanced Technical streams. Some of the Advanced Technical sessions were well over my head, but interesting nonetheless.
I try to temper my expectations when relating session after session of cutting-edge work back to my daily job. These sessions represent an ideal state to work toward. They're more movie trailer than whole film, in that I'm watching the best parts.
Explainable AI (XAI) as a Recurring Theme
One theme that came up often was the concept of Explainable AI (XAI). As a quick background, machine learning models are often described as either White Box or Black Box: a White Box model is one where you can understand which features were responsible for a prediction, while a Black Box model's reasoning is opaque. XAI is a business requirement because people struggle with handing a model decision-making ability without understanding its rationale. I think this hesitation exists for two reasons, only one of which is valid.
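To make the White Box idea concrete, here is a minimal sketch using a hypothetical linear credit-scoring model (the weights and feature names are invented for illustration). In a linear model, each feature's contribution to a prediction is simply its weight times its value, so the rationale behind a score is directly readable:

```python
# Hypothetical linear model: weights learned during training (invented here).
weights = {"income": 0.8, "debt": -1.2, "age": 0.1}

# One applicant's (normalized) feature values.
applicant = {"income": 2.0, "debt": 1.5, "age": 0.5}

# White-box explanation: each feature's contribution is weight * value,
# so we can see exactly why the model scored the applicant as it did.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

for feature, value in contributions.items():
    print(f"{feature}: {value:+.2f}")
print(f"score: {score:+.2f}")
```

A deep neural network making the same prediction would offer no such per-feature breakdown out of the box, which is why it sits at the Black Box end of the spectrum.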
The first reason is insecurity. We're trained from an early age in school to show our work when explaining how we arrived at a solution. We're even told that how you arrived at the answer matters more than the answer itself, because the process needs to be repeatable rather than lucky. A Black Box model throws out the notion that the how is important.
The second, more valid, reason is risk management. When a manager delegates work, they are effectively trusting the judgement of their employee. A manager can give an employee feedback if their judgement on a task fell short. How can a manager trust that a model will consistently make good decisions if they can't understand which features led to its predictions?
I think the balanced approach is to consider how material the decision being predicted is. The more material the decision, the more important XAI becomes.
In summary, I'm glad I went to TMLS. It was fun and I recommend it to anyone who's interested in Machine Learning.