Confirmed sessions 2022
Stay tuned for the full agenda

Sub-surface Defects Detection During Manufacturing Through Sound-based Machine Learning Approach at Hindustan Shipping Limited

Summary:

The sound-based machine learning solution developed for Hindustan Shipping Limited helps identify defects that are sub-surface or interior to the part. Defects are identified in real time, during part production, enabling immediate action rather than waiting until a scrap part has been produced. The takeaways of this presentation are: (1) insights into the sound-based machine learning approach for sub-surface and interior defect detection; (2) how to identify the location and magnitude of a defect in real time.
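
The core idea of such acoustic in-process inspection can be illustrated with a short, self-contained Python sketch: spectral band energies are extracted from sound snippets recorded during production and fed to a classifier. This is a hedged illustration of the general technique, not Hindustan Shipping Limited's actual solution; the data, features and model below are placeholder assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def band_energies(window, n_bands=32):
        # Coarse spectral features: mean magnitude per frequency band.
        spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
        return np.array([b.mean() for b in np.array_split(spectrum, n_bands)])

    rng = np.random.default_rng(0)
    X_audio = rng.normal(size=(200, 2048))  # placeholder sound snippets, one per part
    y = rng.integers(0, 2, 200)             # placeholder labels: 1 = sub-surface defect

    X = np.stack([band_energies(w) for w in X_audio])
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    print(clf.predict(X[:3]))  # in production this would score live audio in real time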

Machine Learning at Production in the Real World: Chances & Limits, Challenges & Solutions

Speakers:

Walter Huber

Summary:

Artificial intelligence and machine learning are well known in industry. When we look at the production side, however, solutions in real use are very limited. In this table discussion, we would like to discuss the reasons for this situation. One blocking point for a rollout is that machine learning and AI are hard for business people to understand. On the other hand, heuristics often generate results similar to those of complex machine learning algorithms and are much easier for non-data-scientists to understand. So, is machine learning just a hype? We will also discuss another critical question: in general, there are two ways of implementing solutions, make or buy. So what is the best way in which situation?

Is AI Too “Spooky” to be Trusted? – Explainable AI with semantic networks at ZF, Knauf & Co.

Speakers:

Britta Hilt

Summary:

Can you trust “artificial brains” and thus “artificial” decisions or recommendations? Well-known deep learning with neural networks is normally a black box: people cannot understand why the AI decided the way it did. Therefore, new AI methods are on the rise: deep learning with semantic networks. This explainable AI lets subject matter experts understand and thus trust AI. In this presentation, case studies are shown from discrete manufacturing (i.e. ZF) and the process industry (i.e. Knauf).

Developing the 2nd Generation of AIML Models for Demand Planning at Beiersdorf AG

Summary:

International FMCG manufacturer Beiersdorf needs to forecast thousands of products every month. In 2021, 10 years after the first neural networks were introduced there, Beiersdorf set out to improve automatic forecasting further by reviewing the latest developments in technology. Surprisingly, some of the most recent and hyped algorithms, such as deep learning, XGBoost, Prophet and BSTS, did not perform well, while simple AI methods customised to the company’s data properties improved accuracy significantly.

Machine Learning Techniques to Preempt IPTV Service Downtime with Time Series Anomaly Detection on DSLAM Systems at Telefonica

Summary:

Telefónica, the biggest Spanish telecommunications company, asked us to provide a machine learning solution capable of detecting when one of their DSLAMs shows an anomaly in registered customers, indicating a loss of IPTV service for those customers. You will be shown how to deal with thousands of time series by combining clustering algorithms, smoothing methods and deep learning tools to obtain efficient, high-performance results.
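
As a rough illustration of the smoothing-plus-thresholding part of such a pipeline, the following self-contained Python sketch flags a sudden drop in a synthetic registered-customers series; the window size, threshold and data are illustrative assumptions, not Telefónica's actual configuration.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    # Synthetic registered-customer counts for one DSLAM, one value per 15 minutes.
    series = pd.Series(500 + rng.normal(0, 5, 960))
    series.iloc[700:720] -= 120  # injected drop resembling an IPTV outage

    baseline = series.rolling(window=96, center=True, min_periods=1).median()
    residual = series - baseline
    z = residual / residual.std()
    anomalies = series[z.abs() > 4]  # sudden loss of registered customers
    print(anomalies.index.min(), anomalies.index.max())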

Machine Learning with Humans for Cyber Security: Integrating Experts into the Learning Process

Summary:

Human-in-the-loop (HITL) is a process in which, as part of the machine learning (ML) workflow, experts are asked for their opinion on the model’s predictions in order to improve it. We’ll discuss how we created a mechanism to automatically predict the best security policies for network DDoS protection for our customers, and explain how we integrated security experts into our ML process, both to optimize the labeling of security policies and to move to production quickly with minimum risk.
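
A common way to realize such a HITL loop is uncertainty sampling: the model scores unlabeled cases and only the least confident ones are routed to experts. The sketch below is a generic, hedged Python illustration; the data, model and the expert_labels helper are hypothetical, not the actual production mechanism.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_labeled = rng.normal(size=(100, 8))   # traffic profiles already labeled by experts
    y_labeled = rng.integers(0, 2, 100)     # placeholder policy labels
    X_pool = rng.normal(size=(1000, 8))     # unlabeled traffic profiles

    model = LogisticRegression().fit(X_labeled, y_labeled)
    uncertainty = 1 - model.predict_proba(X_pool).max(axis=1)
    ask_expert = np.argsort(uncertainty)[-20:]  # route these cases to security experts
    # y_expert = expert_labels(X_pool[ask_expert])  # human step (hypothetical helper)
    # model.fit(np.vstack([X_labeled, X_pool[ask_expert]]),
    #           np.concatenate([y_labeled, y_expert]))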

Taking Data-Driven Process Optimization to the Next Level at Bitburger

Summary:

A malt yield forecast with excellent prediction performance, as well as first transfers to Augustiner Bräu, has been successfully implemented to optimize the beer brewing process. The crucial next step is getting our ready-to-use analysis modules, with their built-in requirements, into running production. For this, we are creating an architecture for robust deployment that considers model resilience and automatically detects data drift and performance decay, eventually triggering new model training.
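
One simple building block for such automated drift detection is a two-sample test that compares a feature's live distribution against its training-time reference. The Python sketch below uses a Kolmogorov-Smirnov test; the data and the 0.01 threshold are illustrative assumptions, not Bitburger's actual deployment logic.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(2)
    reference = rng.normal(0.0, 1.0, 5000)  # feature values seen at training time
    live = rng.normal(0.4, 1.0, 500)        # same feature in running production

    stat, p_value = ks_2samp(reference, live)
    if p_value < 0.01:
        print("data drift detected - trigger new model training")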

Implementing a Predictive Maintenance System for Trumpf Laser

Speakers:

Oliver Bracht

Summary:

By predicting problems, laser machine availability can be increased significantly, and this does more than just reduce maintenance costs. Started as a pure condition monitoring portal, the project for Trumpf Laser evolved into a holistic predictive maintenance system that also facilitates the work of other departments (e.g. customer support). It was also the starting point for a new service: proactive support. These practical examples show the importance of empowering data-driven intelligence for machine manufacturers.

Use of PLC Data for Early Detection of a Serious Production Failure at Kampf

Speakers:

Niklas Haas

Summary:

Machines from Kampf Schneid- und Wickeltechnik GmbH & Co. KG are used worldwide for winding and cutting a wide variety of materials. A tear-off of the material during the production process means an expensive loss of production. We present an approach for how an “AI” can detect an imminent tear-off at an early stage, covering (1) the winding machine, process and infrastructure for data collection, (2) the ML pipeline for unsupervised pattern recognition and (3) briefly, the business case.
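
To make the unsupervised part concrete, here is a minimal, hedged Python sketch that summarizes sliding windows of multichannel machine signals and scores them with an isolation forest; the channels, window length and model choice are placeholder assumptions, not Kampf's actual pipeline.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(3)
    signals = rng.normal(size=(5000, 4))  # placeholder channels, e.g. tension, speed, torque

    win = 50
    feats = np.array([
        np.concatenate([signals[i:i + win].mean(0), signals[i:i + win].std(0)])
        for i in range(0, len(signals) - win, win)
    ])
    scores = IsolationForest(random_state=0).fit(feats).score_samples(feats)
    suspect = np.argsort(scores)[:5]  # most unusual windows: early-warning candidates
    print(suspect)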

Data-driven, Networked Quality Management in the Business Unit Laundry at MIELE

Summary:

Within the research project AKKORD, MIELE, IPS and RapidMiner are working on a modular, expandable and holistic reporting and analysis system that creates transparency about the present and future quality situation. In field data management, quality analyses are enabled by standardized analysis modules and user-specific dashboards. MIELE is designing the system to measure, monitor and forecast various KPIs in order to holistically improve the business unit’s quality management. The implementation at MIELE in particular shows the direct application in a business area where the use of predictive analytics is effective.

Predictive Subscription Lifecycle Marketing at DIE ZEIT

Summary:

A newspaper subscription is defined by various critical events, ranging from the end of the trial subscription to receiving invoices. Based on predictive analyses that anticipate customer behavior around these events, we develop, test, and implement customized marketing interventions covering the whole subscription lifecycle. You will learn about modeling via a custom AutoML pipeline and its close intertwining with marketing execution, which together aim to maximize subscription lifetime value at DIE ZEIT.

How Data Science Assists Volkswagen in Benchmarking and Identifying Similar Work Plan Descriptions

Summary:

Assembling a car is a complex task consisting of many steps, usually grouped and organized in work plans. Based on the car model and its specifications, creating a key performance indicator (KPI) optimized work plan can be very time-consuming. This case study at Volkswagen shows how data science can assist and speed up this process. After various text analytics methods are used to identify similar work plan descriptions, a semi-automated benchmarking approach provides a KPI-driven recommendation.
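
A minimal baseline for finding similar work plan descriptions is TF-IDF vectorization with cosine similarity, sketched below in Python; the example texts are invented, and the real system combines several text analytics methods beyond this.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    work_plans = [
        "mount front bumper and fasten with four screws",
        "attach rear bumper, tighten four screws",
        "install dashboard wiring harness",
    ]
    tfidf = TfidfVectorizer().fit_transform(work_plans)
    print(cosine_similarity(tfidf).round(2))  # high off-diagonal values = candidate matches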

The Rise of AI for Space – Learning from Earth Observation Data to Understand Our Planet

Summary:

New streams of earth observation (EO) data (e.g. from Copernicus and New Space missions) lead to a far more comprehensive picture of our planet. These new global data offer new possibilities for scientists to advance our understanding of the Earth system, and they represent new opportunities for entrepreneurs to turn big data into new types of information services. In this table discussion, we discuss the chances and challenges as well as use cases and resources (tools, methods, data sets) for AI4EO.

Building Trust as a Service: A Shared Responsibility Approach for Data Platforms at CentralNic Group

Speakers:

Mirco Pyrtek

Summary:

Building a reliable single source of truth in a company typically comes with the following dilemma: focusing the responsibility around a dedicated data engineering team (data lake) versus distributing the ownership among the product engineering teams (data mesh). This talk will focus on lessons learned from implementing a shared responsibility model for the trusted flow of information within a data platform.

Markov-based Predictive Quality Analytics for Mass Lens Production at ZEISS

Summary:

Quality improvement for mass production lines has been an ongoing topic for many years. The target is to reduce the rate of defects and scrap during production, which has an impact on sustainability, delivery time and cost. For the example of mass lens production at ZEISS, we introduce a Markov-based method that allows us to trace the movement of a given product through the production line, helping us understand potential root causes of quality losses and thus predict defects. In the end we aim to achieve closed-loop quality control that avoids quality losses through an improved understanding of root causes and proactive actions, enabled by Industry 4.0 technologies such as machine connectivity and artificial intelligence.
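
The basic ingredient of such a Markov-based analysis is a transition matrix estimated from observed product traces through the line. The Python sketch below shows this estimation step only; station names and traces are invented for illustration, not ZEISS production data.

    import numpy as np

    stations = ["molding", "grinding", "coating", "inspection", "scrap"]
    idx = {s: i for i, s in enumerate(stations)}
    traces = [
        ["molding", "grinding", "coating", "inspection"],
        ["molding", "grinding", "scrap"],
        ["molding", "grinding", "coating", "inspection"],
    ]

    counts = np.zeros((len(stations), len(stations)))
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[idx[a], idx[b]] += 1
    P = counts / counts.sum(axis=1, keepdims=True).clip(min=1)
    print(P[idx["grinding"]])  # estimated probability of each next step after grinding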

Improving Industrial Testers with Bayesian Modeling at Hahn Automation

Summary:

Bayesian modeling allows for applications that are driven not only by data but also by domain knowledge. With this technique we extended the functionality of an industrial tester at Hahn Automation to uncover the values of parameters that were not available through non-destructive testing. This in turn enables finding the root causes of NOK work-pieces as well as shortening test cycle times. The session will explain the basics of Bayesian modeling and walk the audience through the project and its impact for Hahn Automation.
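
The essence of the approach, combining a domain-knowledge prior with a noisy indirect measurement, fits in a few lines of Python using a simple grid approximation; all numbers below are invented for illustration and far simpler than the actual tester models.

    import numpy as np

    theta = np.linspace(0.0, 2.0, 400)  # candidate values of a hidden work-piece parameter
    prior = np.exp(-0.5 * ((theta - 1.0) / 0.3) ** 2)  # domain knowledge: around 1.0
    measurement = 1.25  # indirect, non-destructive tester reading
    likelihood = np.exp(-0.5 * ((measurement - theta) / 0.1) ** 2)  # sensor noise model

    posterior = prior * likelihood
    posterior /= posterior.sum()
    print(theta[posterior.argmax()])  # most plausible hidden value given data + knowledge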

SDG Call for Action to the Data Science World

Summary:

This table discussion focuses on how data scientists, data nerds, heads of data science, and chief data officers can contribute to sustainability in AI. We would like to discuss this from different angles, such as the contribution of data science to the SDGs as well as running climate-neutral model development. Participants should gain some understanding of the carbon footprint of intensive modeling and what the industry can do to reduce it.

Machine Learning Extending Computer Vision in Healthcare

Summary:

Machine learning and artificial intelligence are pushing the boundaries across all sectors, and the health sector is no different. Using ML/AI, computer vision is being revolutionised. This presentation discusses the primary use cases of AI in computer vision for health care, the tools and technologies used, some case studies from the speaker’s work, and the major challenges faced.

Data Literacy in Big Pharma: What Works, What Doesn’t – Learnings at MSD

Speakers:

Rafael Knuth

Summary:

While pharma companies are increasingly realizing the value of data, they also need to recognize the importance of a data mindset among non-data employees. Without the right mindset, neither BI nor AI will be utilized. MSD has implemented a 3-year data literacy program to enable marketing and sales employees to understand customer data and insights and to know how to use them. We are in the middle of this journey and will share the approaches we have used and our experience so far.

Machine Learning Assisted Process Optimization as a Service for Health Insurance Companies

Summary:

German health insurance companies are obliged to check billings for accuracy. The effort involved is enormous, and the use of machine learning promises great optimization potential. The talk presents the experience gained by SpectrumK, a provider of data services for health insurance companies. It discusses how different domains (e.g. drugs, hospital stays, home health care) come with different requirements for the machine learning technology used, and how integration into existing processes can succeed.

The Role of Data and Analytics in the Manufacturing and Distribution of Covid Vaccines

Summary:

Over the past two years, ONE LOGIC has supported a major Covid vaccine manufacturer and several government agencies in using data to reliably scale Covid vaccine production and distribute doses when and where they are needed. We believe this case study can be applied broadly across sensitive supply chains in biotech and pharma, and also on a global scale to issues like the current “chip crisis”.

Predicting Treatment Efficacy in Early Clinical Trial Phases at Roche

Speakers:

Hugo Loureiro

Summary:

We have created a new method based on ROPRO, an oncology prognostic score. Our method analyses the longitudinal response of patient cohorts to medications. We conducted a retrospective analysis in which we recreated clinical trials from a large real-world dataset and from real in-house clinical trials. Using this new method, we detected the treatment benefit earlier than with established methodology. This case study shows great promise as a clinical development decision support tool.

Live Predictive Analytics for an Urgent Care System at Greater Manchester

Summary:

The Greater Manchester case study tells a compelling story about collaboration on data sharing across a health system and the utilisation of business intelligence technologies. Multiple health care providers work as a collective urgent care system, sharing system pressures using business intelligence and predictive analytics. The reporting is not only near-live but also supports a view of department pressures above and beyond how many people are waiting for treatment. Predictive metrics enable ambulance flows to be directed proactively so that providers can support each other as a system.

Investigating the Effects of Therapeutic Antibodies Using an Imageflow and AI Pipeline at Roche

Speakers:

Ali Boushehri

Summary:

In our work, we created an AI and high-throughput imaging pipeline which can help biologists generate thousands of single-cell images and analyze them in a short time. This pipeline assists biologists in designing different antibodies, understanding their mode of action, and predicting their efficacy.

Using Deep Learning to Prevent Deceptive Unicode Phishing Attacks

Summary:

Unicode has unified characters from world languages, symbols and emoji into a single standard, enabling easy, interoperable communication. In recent years, however, the availability of similar symbols from diverse languages has been exploited to deceive users for malicious ends. Deep learning gives us the tools to prevent these attacks. You will be shown how this exploit works and how to detect and prevent it by training a deep learning model to detect visual similarities between characters.
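
The heart of the problem is that distinct code points can render almost identically. The hedged Python sketch below renders a Latin and a Cyrillic character and compares their bitmaps directly; a deep model would be trained on exactly this kind of rendered similarity signal. The font path is an assumption and must exist on your system.

    import numpy as np
    from PIL import Image, ImageDraw, ImageFont

    def render(char, font_path="DejaVuSans.ttf", size=32):
        # Rasterize one character to a grayscale bitmap.
        font = ImageFont.truetype(font_path, size)
        img = Image.new("L", (size, size), 255)
        ImageDraw.Draw(img).text((0, 0), char, font=font, fill=0)
        return np.asarray(img, dtype=float) / 255.0

    a, b = render("a"), render("\u0430")  # Latin 'a' vs Cyrillic 'а'
    print(f"visual similarity: {1 - np.abs(a - b).mean():.3f}")  # near 1.0 = homoglyph pair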

Next Generation Data Mesh for Machine Learning

Summary:

In recent years, there have been various efforts toward product thinking and decentralized data loading using data meshes. In deep learning, however, data loading is still challenging to master at scale. In this talk, we present our decentralized data loading solution and show why flexibility and collaboration are key to enabling novel ML use cases. We hope to make large-scale model training accessible to a wider community and move towards more sustainable ML.

Extracting Structured Data from Free-form Customer Requests for felmo

Summary:

While most people prefer writing free-form text, structured data is easier to process. This is a dilemma many companies face, as did our customer felmo (a mobile vet service) when asking users to enter the reasons for vet appointments. We were able to leverage a Sentence-BERT architecture to help them convert these requests into structured data, which they now use to improve appointment preparation and scheduling. We will share our learnings from applying Sentence-BERT to this task.
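
In spirit, the approach embeds each free-text request and matches it against structured categories. Here is a hedged Python sketch with the sentence-transformers library; the model name, categories and example request are assumptions for illustration, not felmo's actual setup.

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed small general-purpose model
    categories = ["vaccination", "skin problem", "limping / lameness", "dental check"]
    request = "My dog has been scratching his ear all week and it looks red"

    cat_emb = model.encode(categories, convert_to_tensor=True)
    req_emb = model.encode(request, convert_to_tensor=True)
    scores = util.cos_sim(req_emb, cat_emb)[0]
    print(categories[int(scores.argmax())])  # -> "skin problem"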

Handle Deep Learning Projects in the Industry: Continental’s Visual Perception Provider

Summary:

For a manufacturer, visual inspection is a crucial part of keeping product quality at high levels. Beyond that, many new applications become possible by combining deep learning with computer vision. Deep learning models can be industrialized with the Visual Perception Provider (VPP), a service developed in-house at Tires. The talk covers the reasoning behind why we decided to go this route and what we are doing, and demonstrates some use cases built with the Visual Perception Provider.

Monocular Camera 2.5D Object Detection for Autonomous Systems at Ridecell

Speakers:

Paridhi Singh

Summary:

Object detection with a monocular camera is extremely important for the automotive industry, as LiDAR data is not only expensive to obtain but also extremely difficult to get labelled. Previous works have tried removing the dependency on LiDAR, but only for inference: they still needed LiDAR data during training. Our work requires no LiDAR data annotations at all. The major advance over previous works lies in the architecture. Previously, 3D detections were performed by stacking two different deep learning networks: a 2D object detection network, followed by a projection to Bird’s Eye View (BEV) to get depth from a depth prediction network. The presented approach instead combines the two networks in one single feed-forward pass, with a common backbone that separates into task-specific heads. Having two heads on a common backbone lets backpropagation learn weights that mutually improve the two tasks of 2D object detection and depth prediction simultaneously, giving better and faster output than the previous works.
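
The shared-backbone, two-head idea can be sketched in a few lines of PyTorch; the layer sizes and head outputs below are placeholders to show the structure, not Ridecell's actual architecture.

    import torch
    import torch.nn as nn

    class TwoHeadNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(   # common feature extractor
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.det_head = nn.Conv2d(64, 6, 1)    # per-cell 2D box + class logits
            self.depth_head = nn.Conv2d(64, 1, 1)  # per-cell depth prediction

        def forward(self, x):
            feats = self.backbone(x)  # one single feed-forward pass
            return self.det_head(feats), self.depth_head(feats)

    boxes, depth = TwoHeadNet()(torch.randn(1, 3, 256, 256))
    print(boxes.shape, depth.shape)
    # A joint loss over both outputs lets backprop improve detection and depth together.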

How to Detect Silent Failures in Machine Learning Models

Summary:

AI algorithms deteriorate and fail silently over time, impacting the business’ bottom line. The talk focuses on how you should monitor machine learning in production. It is a conceptual and informative talk addressed to data scientists and machine learning engineers. We’ll cover the types of failures and how to detect and address them.
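
One widely used silent-failure check is the Population Stability Index (PSI), which compares a production feature distribution against its training reference without needing fresh labels. Below is a hedged, self-contained Python sketch; the data and the 0.2 rule of thumb are illustrative.

    import numpy as np

    def psi(expected, actual, bins=10):
        # Population Stability Index between a reference and a live sample.
        edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
        e = np.clip(np.histogram(expected, edges)[0] / len(expected), 1e-6, None)
        a = np.clip(np.histogram(actual, edges)[0] / len(actual), 1e-6, None)
        return float(np.sum((a - e) * np.log(a / e)))

    rng = np.random.default_rng(4)
    score = psi(rng.normal(0, 1, 10000), rng.normal(0.3, 1, 1000))
    print(score)  # rule of thumb: > 0.2 suggests drift worth investigating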

Named Entity Recognition Deployed in Minutes: NERDA and FastAPI to Deploy Transfer Learning Quickly

Summary:

In this session, two open source libraries will be demonstrated to show how you can quickly deploy custom Named Entity Recognition (NER) solutions. NERDA is an easy-to-use interface for applying pre-trained transformer models (e.g. from Huggingface) to your own challenges. Combined with FastAPI, a lightweight web framework for Python, you can build your solution in a short time.
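
Following the pattern from NERDA's documentation plus a thin FastAPI wrapper, a deployable NER service can look roughly like the sketch below; exact function names and arguments may differ between library versions, so treat this as an assumption-laden outline rather than the speakers' actual code.

    from NERDA.datasets import get_conll_data, download_conll_data
    from NERDA.models import NERDA
    from fastapi import FastAPI

    download_conll_data()  # fetch the CoNLL-2003 benchmark data once
    model = NERDA(
        dataset_training=get_conll_data("train"),
        dataset_validation=get_conll_data("valid"),
        transformer="bert-base-multilingual-uncased",  # any Huggingface checkpoint
    )
    model.train()  # fine-tunes the transformer for NER

    app = FastAPI()

    @app.get("/ner")
    def ner(text: str):
        return {"entities": model.predict_text(text)}

    # Serve with: uvicorn main:app   (then GET /ner?text=...)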

Deploying Deep Learning Models using Apache TVM

Summary:

This session is an introduction to Apache TVM, an end-to-end compiler framework for deep learning models. It can compile machine learning models from various deep learning frameworks into machine code for different types of hardware targets such as CPUs, GPUs, FPGAs and microcontrollers. It provides bindings for higher-level languages like C++ and Rust, and also supports autotuning models for different hardware targets. You will learn how to use Apache TVM to deploy your models on different target systems.
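
A minimal compile-and-run flow with TVM's relay frontend and graph executor looks roughly like this Python sketch; the ONNX file, input name and shape are placeholders, and API details can vary between TVM releases.

    import numpy as np
    import onnx
    import tvm
    from tvm import relay
    from tvm.contrib import graph_executor

    onnx_model = onnx.load("model.onnx")  # placeholder model path
    mod, params = relay.frontend.from_onnx(onnx_model, {"input": (1, 3, 224, 224)})

    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm", params=params)  # "llvm" = generic CPU

    module = graph_executor.GraphModule(lib["default"](tvm.cpu()))
    module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
    module.run()
    print(module.get_output(0).shape)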

Multi-Level Neuroevolution Deep Learning Framework for Multivariate Anomaly Detection

Summary:

This session presents Anomaly Detection Neuroevolution (AD-NEv), a multi-level optimized neuroevolution framework. The method adapts genetic algorithms for: (i) creating an ensemble model based on the bagging technique; (ii) optimizing the topology of single anomaly detection models; (iii) non-gradient fine-tuning of network parameters. The results show that the models created by AD-NEv achieve significantly better results than well-known anomaly detection deep learning models.
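
As a toy illustration of the topology-optimization level (ii), the Python sketch below evolves hidden-layer layouts with selection and mutation; the fitness function is a stand-in, since in AD-NEv it would train and score an actual anomaly detection model.

    import random

    def fitness(topology):
        # Placeholder score; AD-NEv would train an anomaly detector here instead.
        return -abs(sum(topology) - 96) - len(topology)

    random.seed(0)
    population = [[random.choice([16, 32, 64]) for _ in range(random.randint(1, 4))]
                  for _ in range(20)]

    for generation in range(30):
        population.sort(key=fitness, reverse=True)
        parents = population[:5]                                        # selection
        children = [[max(8, h + random.choice([-8, 0, 8])) for h in p]  # mutation
                    for p in parents]
        population = parents + children + population[:10]

    population.sort(key=fitness, reverse=True)
    print(population[0])  # best hidden-layer topology found by the toy search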
