Dr. Sheela Siddappa
The sound-based machine learning solution developed for Hindustan Shipping Limited helps identify defects that are sub-surface or interior to the part. Defects are identified in real time, during part production, enabling immediate action rather than waiting until a scrap part has been produced. Takeaways of this presentation are: (1) insights into the sound-based machine learning approach for sub-surface and interior defect detection; (2) how to identify the location and magnitude of a defect in real time.
Artificial intelligence and machine learning are well known in industry. On the production side, however, solutions in real use are very limited. In this table discussion, we would like to discuss the reasons for this situation. One blocking point for a rollout is that machine learning and AI are hard for business people to understand. On the other hand, heuristics often generate results similar to those of complex machine learning algorithms and are much easier for non-data-scientists to understand. So, is machine learning just hype? We will also discuss another critical question: in general, there are two ways of implementing solutions, make or buy. Which is the best way in which situation?
Can you trust “artificial brains” and thus “artificial” decisions or recommendations? Well-known deep learning with neural networks is normally a black box: people cannot understand why the AI decided the way it did. Therefore, new AI methods are on the rise: deep learning with semantic networks. This explainable AI lets subject matter experts understand, and thus trust, AI. In this presentation, case studies are shown from discrete manufacturing (i.e. ZF) and the process industry (i.e. Knauf).
Prof. Dr. Sven Crone
International FMCG manufacturer Beiersdorf needs to forecast thousands of products every month. In 2021, ten years after the first neural networks were introduced, Beiersdorf set out to improve automatic forecasting further by reviewing the latest developments in technology. Surprisingly, some of the most recent and hyped algorithms, such as deep learning, XGBoost, Prophet and BSTS, did not perform well, while simpler AI methods customised to their data properties improved accuracy significantly.
Telefónica, the biggest Spanish telecommunications company, asked us to provide a machine learning solution capable of detecting when one of their DSLAMs shows an anomaly in registered customers, indicating a loss of customer IPTV service. You will be shown how to deal with thousands of time series by combining clustering algorithms, smoothing methods and deep learning tools to obtain efficient and high-performance results.
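One building block mentioned above, smoothing a count series and flagging sudden drops against the smoothed trend, can be sketched in a few lines. This is an illustrative toy, not Telefónica's actual system; the data, window size and threshold are invented for the example.

```python
# Minimal sketch: trailing moving-average smoothing plus a residual
# threshold to flag anomalous drops in a count series (hypothetical data).

def moving_average(series, window=3):
    """Smooth a series with a trailing moving average."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        out.append(sum(series[lo:i + 1]) / (i + 1 - lo))
    return out

def flag_anomalies(series, window=3, threshold=10.0):
    """Flag indices where the observed value drops far below the smoothed trend."""
    smoothed = moving_average(series, window)
    return [i for i, (x, s) in enumerate(zip(series, smoothed)) if s - x > threshold]

registered = [100, 102, 101, 99, 100, 60, 98, 101]  # sudden drop at index 5
print(flag_anomalies(registered))  # [5]
```

In a production setting, the per-DSLAM threshold would typically come from the clustering step, so that series with similar behaviour share calibrated parameters.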
Human In The Loop (HITL) is a process in which, as part of the Machine Learning (ML) workflow, experts are asked their opinion about the model’s predictions in order to improve it. We’ll discuss how we created a mechanism to automatically predict the best security policies for network DDoS to protect our customers, and explain how we integrated security experts into our ML process, in order to both optimize the labeling of security policies, and move to production quickly with minimum risk.
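The core HITL routing idea, sending only low-confidence predictions to the experts, can be shown as a small sketch. The policies, confidences and threshold below are hypothetical placeholders, not the actual DDoS product.

```python
# Hypothetical human-in-the-loop routing: predictions the model is
# confident about are deployed automatically; the rest go to a
# security-expert review queue, whose labels later retrain the model.

def route_predictions(predictions, threshold=0.9):
    """Split (policy, confidence) pairs into auto-deploy and expert-review lists."""
    auto, review = [], []
    for policy, confidence in predictions:
        (auto if confidence >= threshold else review).append(policy)
    return auto, review

preds = [("rate-limit", 0.97), ("block-udp", 0.62), ("challenge", 0.91)]
auto, review = route_predictions(preds)
print(auto)    # ['rate-limit', 'challenge']
print(review)  # ['block-udp']
```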
A malt yield forecast with excellent prediction performance, as well as first transfers to Augustiner Bräu, has been successfully implemented to optimize the beer brewing process. The crucial next step is getting our ready-to-use analysis modules, with built-in requirements, into running production. For this, we are creating an architecture for robust deployment, considering model resilience and automated detection of data drift and performance decay to eventually trigger new model training.
By predicting problems, laser machine availability can be increased significantly and maintenance costs reduced. Started as a pure condition monitoring portal, the project for Trumpf Laser evolved into a holistic predictive maintenance system that facilitates the work of other departments (e.g. customer support). It was also the starting point for a new service: proactive support. These practical examples show the importance of empowering data-driven intelligence for machine manufacturers.
Machines from Kampf Schneid- und Wickeltechnik GmbH & Co. KG are used worldwide for winding and cutting a wide variety of materials. A tear-off of the flow materials during the production process means an expensive loss of production. We present an approach to how an “AI” can detect an imminent tear-off at an early stage. We briefly present (1) the winding machine, process and infrastructure for data collection, (2) the ML pipeline for unsupervised pattern recognition and (3) the business case.
Within the research project AKKORD, MIELE, IPS and RapidMiner are working on developing a modular, expandable and holistic reporting and analysis system that creates transparency about the present and future quality situation. In field data management, quality analyses are enabled by setting up standardized analysis modules and user-specific dashboards. MIELE designs the system to measure, monitor and forecast various KPIs to holistically improve the business unit's quality management. The implementation at MIELE in particular shows the direct application in a business area where the use of predictive analytics is effective.
A newspaper subscription is defined by various critical events, ranging from the end of the trial subscription to receiving invoices. Based on predictive analyses that anticipate customer behavior during these events, we develop, test, and implement customized marketing interventions covering the whole subscription lifecycle. You will learn about modeling via a custom AutoML pipeline and its close intertwining with marketing execution, which together aim to maximize subscription lifetime value at DIE ZEIT.
Assembling a car is a complex task consisting of many steps, usually grouped and organized in work plans. Based on the car model and its specifications, creating a key performance indicator (KPI) optimized work plan can be very time-consuming. This case study at Volkswagen shows how data science can assist and speed up this process. After using various text analytics methods to identify similar work plan descriptions, a semi-automated benchmarking approach provides a KPI-driven recommendation.
New streams of earth observation (EO) data (e.g. from Copernicus and New Space missions) lead to a far more comprehensive picture of our planet. These new global data offer new possibilities for scientists to advance our understanding of the Earth System. They also represent new opportunities for entrepreneurs to turn big data into new types of information services. In this table discussion, we discuss the chances and challenges as well as use cases and resources (tools, methods, data sets) for AI4EO.
Building a reliable single source of truth in a company typically comes with the following dilemma: focusing the responsibility around a dedicated data engineering team (data lake) versus distributing the ownership among the product engineering teams (data mesh). This talk will focus on lessons learned from implementing a shared responsibility model for the trusted flow of information within a data platform.
Quality improvement for mass production lines has been an ongoing topic for many years. The target is to reduce the rate of defects and scrap during production, which has an impact on sustainability, delivery time and cost. For the example of mass lens production at ZEISS, we introduce a Markov-based method that allows us to trace the movement of a given product through the production line, helping us understand potential root causes of quality losses and thus predict defects. Ultimately, we aim to achieve closed-loop quality control that avoids quality losses through an improved understanding of root causes and proactive actions, enabled by Industry 4.0 technologies such as machine connectivity and artificial intelligence.
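The Markov idea can be illustrated with a toy transition model: each production station is a state, and the probability of a product path is the product of its step transition probabilities. The stations and probabilities below are invented for the example, not ZEISS data.

```python
# Toy Markov chain over production stations (hypothetical transition
# probabilities estimated, in practice, from traced product movements).

transitions = {
    ("molding", "coating"): 0.8,
    ("molding", "rework"): 0.2,
    ("rework", "coating"): 1.0,
    ("coating", "inspection"): 0.9,
    ("coating", "rework"): 0.1,
}

def path_probability(path):
    """Multiply transition probabilities along a station sequence."""
    p = 1.0
    for a, b in zip(path, path[1:]):
        p *= transitions.get((a, b), 0.0)
    return p

print(path_probability(["molding", "coating", "inspection"]))  # ~0.72
```

Comparing path probabilities of defective versus good products is one way such a model can point at stations correlated with quality losses.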
Dr. Maksim Greiner
Bayesian modeling allows for applications that are driven not only by data but also by domain knowledge. With this technique, we extended the functionality of an industrial tester at Hahn Automation to uncover the values of parameters that were not available through nondestructive testing. This in turn enables finding the root causes of NOK work-pieces as well as shortening test cycle times. The session will explain the basics of Bayesian modeling and walk the audience through the project and its impact for Hahn Automation.
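The basic mechanic, combining a prior over candidate parameter values with a measurement likelihood to get a posterior, can be sketched with a tiny grid model. This is a generic textbook example with invented numbers, not Hahn Automation's actual model.

```python
import math

# Toy grid-based Bayesian update: infer a latent part parameter from a
# noisy measurement by combining a prior over candidate values with a
# Gaussian likelihood (all values hypothetical).

def posterior(candidates, prior, measurement, sigma=0.5):
    """Return the normalized posterior over candidate parameter values."""
    likelihood = [math.exp(-0.5 * ((measurement - c) / sigma) ** 2) for c in candidates]
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

candidates = [1.0, 2.0, 3.0]
post = posterior(candidates, [1 / 3] * 3, measurement=2.2)
print(max(zip(post, candidates))[1])  # 2.0 is the most probable value
```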
Dr. Nina Meinel
Dr. Sandra Romeis
This table discussion focuses on how data scientists, data nerds, heads of data science and chief data officers can contribute to sustainability in AI. We would like to discuss this from different views, such as science's contribution to the SDGs as well as running climate-neutral model development. Participants should get some understanding of the carbon footprint of intensive modeling and what the industry can do to reduce it.
Machine learning and artificial intelligence are pushing the boundaries across all sectors, and the health sector is no different. Computer vision in particular is being revolutionised by ML/AI. This presentation discusses the primary use cases of AI in computer vision for health care, the tools and technologies used, some case studies from the speaker's work and the major challenges faced.
While pharma companies are increasingly realizing the value of data, they also need to realize the importance of a data mindset among non-data employees. Without the right mindset, neither BI nor AI will be utilized. MSD has implemented a three-year data literacy program to enable marketing and sales employees to understand customer data and insights and know how to use them. We are in the middle of this journey and will share the approaches we have used and our experience so far.
Dr. Steffen Wagner
German health insurance companies are obliged to check billings for accuracy. The effort involved is enormous, and the use of machine learning promises great optimization potential. The talk presents the experience gained by SpectrumK, a provider of data services for health insurance companies. We will discuss how different regimes (e.g. drugs, hospital stays, home health care) come with different requirements for the machine learning technology used, and how integration into existing processes can succeed.
Dr. Sebastian Wernicke
Over the past two years, ONE LOGIC supported a major Covid vaccine manufacturer and several government agencies in using data to reliably scale Covid vaccine production and distribute doses when and where they are needed. We believe this case study can be applied broadly across sensitive supply chains in biotech and pharma, but also on a global scale to issues like the current “chip crisis”.
We have created a new method based on the ROPRO (oncology prognostic score). Our method analyses the longitudinal response of patient cohorts to medications. We conducted a retrospective analysis in which we recreated clinical trials from a large real-world dataset and from real in-house clinical trials. Using this new method, we detected treatment benefit earlier than with established methodology. This case study shows great promise as a clinical development decision support tool.
The Greater Manchester case study tells a compelling story about collaboration around data sharing across a health system and the utilisation of business intelligence technologies. Multiple health care providers work as a collective urgent care system, sharing system pressures using business intelligence and predictive analytics. The reporting is not only near-live but also supports a view of department pressures beyond how many people are waiting for treatment. Predictive metrics enable the proactive direction of ambulance flows so providers support each other as a system.
In our work, we created an AI and high-throughput imaging pipeline that can help biologists generate thousands of single-cell images and analyze them in a short time. This pipeline assists biologists in designing different antibodies, understanding their mode of action and predicting their efficacy.
Dimas Muñoz Montesinos
Unicode has unified characters from world languages, symbols and emoji into a single standard, enabling easy, interoperable communication. However, in recent years the availability of similar-looking symbols from diverse languages has been exploited to deceive users for malicious ends. Deep learning gives us the tools to prevent these attacks. You will be shown how this exploit works and how to detect and prevent it by training a deep learning model to detect visual similarities between characters.
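The attack itself is easy to demonstrate. The sketch below uses a tiny hand-made confusables table as a stand-in for the learned visual-similarity model described in the talk: each character is mapped to its canonical look-alike, and a string that matches a trusted name only after that mapping is flagged as a spoof.

```python
# Stand-in sketch: a real detector learns visual similarity with a deep
# model; this small hand-made confusables table (Cyrillic look-alikes of
# Latin letters) illustrates the attack and a lookup-based defence.

CONFUSABLES = {
    "а": "a",  # CYRILLIC SMALL LETTER A
    "е": "e",  # CYRILLIC SMALL LETTER IE
    "о": "o",  # CYRILLIC SMALL LETTER O
    "р": "p",  # CYRILLIC SMALL LETTER ER
}

def skeleton(text):
    """Map each character to its canonical look-alike."""
    return "".join(CONFUSABLES.get(ch, ch) for ch in text)

def is_spoof(candidate, trusted):
    """Spoof: looks like a trusted name but is not byte-identical to it."""
    return candidate != trusted and skeleton(candidate) == trusted

print(is_spoof("аpple.com", "apple.com"))  # True: leading Cyrillic 'а'
```

Unicode's own security mechanisms (UTS #39) define a skeleton of this kind; a trained model generalises it to pairs no table has listed.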
Dr. Thomas Wollmann
In recent years, there have been various efforts towards product thinking and decentralized data loading using data meshes. In deep learning, however, data loading is still challenging to master at scale. In this talk, we present our decentralized data loading solution and show why flexibility and collaboration are key to enabling novel ML use cases. We hope to make large-scale model training accessible to a wider community and move towards more sustainable ML.
While most people prefer writing free-form text, it is easier to process structured data. This is a dilemma many companies face, as did our customer felmo (a mobile vet service) when asking users to input appointment reasons for vet visits. We were able to leverage a Sentence-BERT architecture to help them convert this free text to structured data, which they can now use to improve their appointment preparation and scheduling. We will share our learnings from applying Sentence-BERT to this task.
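The matching pattern is simple: embed the free text and every target category, then pick the category with the highest similarity. In production this uses Sentence-BERT embeddings; a bag-of-words vector stands in below so the sketch stays self-contained, and the categories and example text are invented.

```python
import math
from collections import Counter

# Sketch of free-text-to-category matching. embed() is a bag-of-words
# stand-in for a sentence encoder such as Sentence-BERT.

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

CATEGORIES = ["vaccination appointment", "skin rash", "dental cleaning"]

def classify(free_text):
    """Map a free-form appointment reason to the closest category."""
    vec = embed(free_text)
    return max(CATEGORIES, key=lambda c: cosine(vec, embed(c)))

print(classify("my dog has a rash on its skin"))  # 'skin rash'
```

Swapping `embed()` for a real sentence encoder keeps the rest of the pipeline unchanged, which is what makes the pattern easy to adopt.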
Joana Raquel Silva
For a manufacturer, visual inspection is crucial to keeping product quality at high levels. Beyond that, many new applications become possible by combining deep learning with computer vision. Deep learning models can be industrialized with the Visual Perception Provider (VPP), a service developed in-house at Tires. The talk covers the reasoning behind why we decided to go this route and what we are doing. Some use cases built with the Visual Perception Provider will also be demonstrated.
Object detection with a monocular camera is extremely important for the automotive industry, as LiDAR data is not only expensive to obtain but also extremely difficult to label. Previous works have tried to remove the dependency on LiDAR, but only for inference; they still needed LiDAR data during training. Our work requires no LiDAR data annotations. The major advancement over previous works is architectural: previously, 3D detection was performed by stacking two different deep learning networks, a 2D object detection network followed by a projection to Bird's Eye View (BEV) to obtain depth from a depth prediction network. The presented approach instead combines the two networks into a single feed-forward pass with a common backbone that branches into separate heads. Having two heads on a common backbone lets backpropagation learn weights that mutually improve the two tasks of 2D object detection and depth prediction simultaneously, giving better and faster output than previous works.
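The single-pass design can be shown structurally with toy functions (summary statistics instead of real convolutional networks, thresholds invented): the backbone runs once, and both heads consume the same shared features.

```python
# Structural sketch of the shared-backbone, two-head design. The
# "networks" are toy functions; only the dataflow mirrors the approach.

def backbone(image):
    """Shared feature extractor (toy: summary statistics of the image)."""
    flat = [px for row in image for px in row]
    return {"mean": sum(flat) / len(flat), "max": max(flat)}

def detection_head(features):
    """Toy 2D detection head operating on the shared features."""
    return {"boxes": 1 if features["max"] > 0.5 else 0}

def depth_head(features):
    """Toy depth head operating on the same shared features."""
    return {"depth": 1.0 / (features["mean"] + 1e-6)}

def forward(image):
    """Single forward pass: compute shared features once, run both heads."""
    features = backbone(image)
    return detection_head(features), depth_head(features)

dets, depth = forward([[0.1, 0.9], [0.2, 0.4]])
print(dets["boxes"])  # 1
```

Because both heads backpropagate into the same backbone during training, gradients from each task shape the shared features, which is the mutual-improvement effect the abstract describes.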
AI algorithms deteriorate and fail silently over time, impacting the business's bottom line. The talk focuses on how you should monitor machine learning in production. It is a conceptual and informative talk addressed to data scientists and machine learning engineers. We will learn about the types of failures and how to detect and address them.
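One common monitoring check, comparing a live feature window against the training distribution, can be sketched briefly. The data and the three-sigma-style threshold are illustrative choices, not a recommendation from the talk.

```python
import statistics

# Illustrative drift check: alert when the mean of a production window
# shifts too far from the training mean, measured in standard errors.

def drift_alert(train, live, z_threshold=3.0):
    """True when the live-window mean drifts beyond the z threshold."""
    mu = statistics.mean(train)
    sigma = statistics.stdev(train)
    n = len(live)
    z = abs(statistics.mean(live) - mu) / (sigma / n ** 0.5)
    return z > z_threshold

train = [10.0, 10.5, 9.5, 10.2, 9.8, 10.1, 9.9, 10.4]
print(drift_alert(train, [10.0, 10.1, 9.9, 10.2]))   # False: stable
print(drift_alert(train, [12.5, 12.8, 12.2, 12.6]))  # True: drifted
```

Real monitoring would run such checks per feature and per model output, alongside label-delay-aware performance tracking.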
In this session, two open source libraries will be demonstrated to show how you can quickly deploy custom Named Entity Recognition (NER) solutions. NERDA is an easy-to-use interface for applying pre-trained transformer models (e.g. from Huggingface) to your own challenges. Combined with FastAPI, a lightweight web framework for Python, you can build your solution in a short time.
Abhilash Babu Jyotheendra Babu
This session is an introduction to Apache TVM, an end-to-end compiler framework for deep learning models. It can compile machine learning models from various deep learning frameworks into machine code for different types of hardware targets such as CPUs, GPUs, FPGAs and microcontrollers. It provides bindings for higher-level languages such as C++ and Rust, and also supports autotuning models for different hardware targets. You will learn how to use Apache TVM to deploy your models on different target systems.
This session presents Anomaly Detection Neuroevolution (AD-NEv), a multi-level optimized neuroevolution framework. The method adapts genetic algorithms for: i) creating an ensemble model based on the bagging technique; ii) optimizing the topology of single anomaly detection models; iii) non-gradient fine-tuning of network parameters. The results show that the models created by AD-NEv achieve significantly better results than well-known anomaly detection deep learning models.
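The topology-optimization step can be illustrated with a toy evolutionary loop (a sketch, not the AD-NEv implementation): a candidate topology is a list of layer widths, mutation perturbs one width, and selection keeps mutants that improve a stand-in fitness function.

```python
import random

# Toy neuroevolution-style search. fitness() is a stand-in objective;
# in AD-NEv it would be an anomaly detection score of the trained model.

def fitness(topology):
    """Stand-in objective: prefer 3-layer networks totalling 32 units."""
    return abs(sum(topology) - 32) + len(topology)

def evolve(start, steps=200, seed=0):
    rng = random.Random(seed)
    best = list(start)
    for _ in range(steps):
        child = list(best)
        i = rng.randrange(len(child))            # mutate one layer width
        child[i] = max(1, child[i] + rng.choice([-4, 4]))
        if fitness(child) < fitness(best):       # selection: keep improvements
            best = child
    return best

best = evolve([8, 8, 8])
print(sum(best))  # 32: the search hits the target total width
```

AD-NEv evolves a whole population with crossover and runs this at several levels (ensemble, topology, weights); the loop above only shows the mutate-and-select core.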