Confirmed Sessions
Machine Learning Week Europe
14-18 June, 2021

Deep Learning World – MONDAY

PAW Climate – MONDAY

PAW Industry 4.0 – TUESDAY

PAW Finance – WEDNESDAY

PAW Business – THURSDAY

PAW Healthcare – FRIDAY

Sub-surface Defects Detection During Manufacturing Through Sound-based Machine Learning Approach at Hindustan Shipping Limited

Summary:

The sound-based machine learning solution developed for Hindustan Shipping Limited helps identify defects that are sub-surface or interior to the part. Defects are identified in real time, during part production, enabling immediate action rather than waiting until a scrap part has been produced. Takeaways of this presentation are: (1) insights into the sound-based machine learning approach for sub-surface and interior defect detection; (2) how to identify the location and magnitude of a defect in real time.

Machine Learning at Production in the Real World: Chances & Limits, Challenges & Solutions

Speakers:

Walter Huber

Summary:

Artificial intelligence and machine learning are well known in industry. When we look at the production side, however, solutions actually in use are very limited. In this table discussion, we would like to discuss the reasons for this situation. One blocking point for a rollout is that machine learning and AI are hard for business people to understand. On the other hand, heuristics often generate results similar to complex machine learning algorithms and are much easier for non-data scientists to understand. So, is machine learning just hype? We will also discuss another critical question: in general, there are two ways of implementing solutions, make or buy. So what is the best way in which situation?

Developing the 2nd Generation of AIML Models for Demand Planning at Beiersdorf AG

Summary:

International FMCG manufacturer Beiersdorf needs to forecast thousands of products every month. In 2021, 10 years after the first neural networks were introduced, Beiersdorf set out to improve automatic forecasting further by reviewing the latest developments in technology. Surprisingly, some of the most recent and hyped algorithms such as deep learning, XGBoost, Prophet, BSTS and others did not perform well, but simple AI methods customised to their data properties improved accuracy significantly.

Machine Learning Techniques to Preempt IPTV Service Downtime with Time Series Anomaly Detection on DSLAM Systems at Telefonica

Summary:

Telefónica, the biggest Spanish telecommunications company, asked us to provide a machine learning solution capable of detecting when one of their DSLAMs has an anomaly in registered customers indicating a loss in customer IPTV service. You will be shown how to deal with thousands of time series data by combining clustering algorithms, smoothing methods and deep learning tools to obtain efficient and high-performance results.

Taking Data-Driven Process Optimization to the Next Level at Bitburger

Summary:

A malt yield forecast with an excellent prediction performance, as well as first transfers to Augustiner Bräu, were successfully implemented to optimize the beer brewing process. The crucial next step is getting our ready-to-use analysis modules with built-in requirements into the running production. For this, we are creating an architecture for robust deployment, considering model resilience and automated detection of data drift and performance decay to eventually trigger new model training.

Implementing a Predictive Maintenance System for Trumpf Laser

Speakers:

Oliver Bracht

Summary:

By predicting problems, laser machine availability can be increased significantly, and reducing maintenance costs is only part of the benefit. Started as a pure condition monitoring portal, the project for Trumpf Laser evolved into a holistic predictive maintenance system that facilitates the work of other departments (e.g. customer support). It also became the starting point for a new service: proactive support. These practical examples show the importance of empowering data-driven intelligence for machine manufacturers.

Use of PLC Data for Early Detection of a Serious Production Failure at Kampf

Summary:

Machines from Kampf Schneid- und Wickeltechnik GmbH & Co. KG are used worldwide for winding and cutting a wide variety of materials. A tear-off of the flow materials during the production process means an expensive loss of production. We present an approach showing how an “AI” can detect an imminent tear-off at an early stage, covering (1) the winding machine, process and infrastructure for data collection, (2) the ML pipeline for unsupervised pattern recognition and (3) briefly, the business case.

Data-driven, Networked Quality Management in the Business Unit Laundry at MIELE

Summary:

Within the research project AKKORD, MIELE, IPS and RapidMiner are working on developing a modular, expandable and holistic reporting and analysis system that creates transparency about the present and future quality situation. In field data management, quality analyses are enabled by setting up standardized analysis modules and user-specific dashboards. MIELE designs the system to measure, monitor and forecast various KPIs to holistically improve the business unit’s quality management. The implementation at MIELE in particular shows the direct application in a business area where the use of predictive analytics is effective.

Predictive Subscription Lifecycle Marketing at DIE ZEIT

Summary:

A newspaper subscription is defined by various critical events, ranging from the end of the trial subscription to receiving invoices. Based on predictive analyses that anticipate customer behavior during these events, we develop, test, and implement customized marketing interventions covering the whole subscription lifecycle. You will learn about modeling via a custom AutoML pipeline and its close intertwining with marketing execution, which together aim to maximize subscription lifetime value at DIE ZEIT.

How Data Science Assists Volkswagen in Benchmarking and Identifying Similar Work Plan Descriptions

Summary:

Assembling a car is a complex task consisting of many steps, usually grouped and organized in work plans. Based on the car model and its specifications, creating a key performance indicator (KPI)-optimized work plan can be very time consuming. This case study at Volkswagen shows how data science can assist and speed up this process. After using various text analytics methods to identify similar work plan descriptions, a semi-automated benchmarking approach provides a KPI-driven recommendation.

Building Trust as a Service: A Shared Responsibility Approach for Data Platforms at CentralNic Group

Speakers:

Mirco Pyrtek

Summary:

Building a reliable single source of truth in a company typically comes with the following dilemma: focusing the responsibility around a dedicated data engineering team (data lake) versus distributing the ownership among the product engineering teams (data mesh). This talk will focus on lessons learned from implementing a shared responsibility model for the trusted flow of information within a data platform.

Markov-based Predictive Quality Analytics for Mass Lens Production at ZEISS

Summary:

Quality improvement for mass production lines has been an ongoing topic for many years. The target is to reduce the rate of defects and scrap during production, which has an impact on sustainability, delivery time and cost. Using the example of mass lens production at ZEISS, we introduce a Markov-based method that allows us to trace the movement of a given product through the production line, helping us understand potential root causes of quality losses and thus predict defects. In the end we aim to achieve closed-loop quality control that avoids quality losses through an improved understanding of root causes and proactive actions, enabled by Industry 4.0 technologies such as machine connectivity and artificial intelligence.
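
As a flavour of the underlying idea (a minimal, illustrative sketch, not the ZEISS implementation): a first-order Markov model of the production line can be estimated from observed part routings, and routings with unusually low probability, such as repeated rework loops, can then be flagged as candidates for quality losses. Station names and routings below are made up.

```python
import numpy as np

# Hypothetical station labels; a routing is the sequence of stations a part visited.
stations = ["molding", "grinding", "coating", "inspection", "rework", "scrap"]
idx = {s: i for i, s in enumerate(stations)}

routings = [
    ["molding", "grinding", "coating", "inspection"],
    ["molding", "grinding", "rework", "grinding", "coating", "inspection"],
    ["molding", "grinding", "coating", "scrap"],
]

# Estimate transition probabilities P[i, j] = P(next station j | current station i).
counts = np.zeros((len(stations), len(stations)))
for route in routings:
    for a, b in zip(route, route[1:]):
        counts[idx[a], idx[b]] += 1
P = counts / counts.sum(axis=1, keepdims=True).clip(min=1)

def route_log_likelihood(route):
    """Low values indicate unusual paths, e.g. repeated rework loops."""
    return sum(np.log(P[idx[a], idx[b]] + 1e-9) for a, b in zip(route, route[1:]))

print(route_log_likelihood(["molding", "grinding", "coating", "inspection"]))
```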

Improving Industrial Testers with Bayesian Modeling at Hahn Automation

Summary:

Bayesian modeling allows for applications that are driven not only by data, but also by domain knowledge. With this technique we extended the functionality of an industrial tester at Hahn Automation to uncover the values of parameters that were not available through nondestructive testing. This in turn enables finding the root causes of NOK work-pieces as well as shortening test cycle times. The session will explain the basics of Bayesian modeling and walk the audience through the project and its impact for Hahn Automation.
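
As a flavour of the technique (a minimal sketch with made-up variable names and numbers, not the Hahn Automation model): a hidden parameter that cannot be measured nondestructively is inferred from indirect test readings by combining a domain-informed prior with a likelihood, written here with PyMC (v4+ API assumed).

```python
import numpy as np
import pymc as pm

# Illustrative indirect readings from a nondestructive test on several work-pieces.
readings = np.array([5.8, 6.1, 5.9, 6.3, 6.0])

with pm.Model():
    # Domain knowledge enters as the prior on the hidden parameter (e.g. internal friction).
    latent = pm.Gamma("latent", alpha=2.0, beta=2.0)
    # Physics-inspired link between the hidden parameter and what the tester can measure.
    pm.Normal("reading", mu=5.0 + 1.0 * latent, sigma=0.2, observed=readings)
    trace = pm.sample(1000, tune=1000, progressbar=False)

print(trace.posterior["latent"].mean())
```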

SDG Call for Action to the Data Science World

Summary:

This table discussion focuses on how data scientists, data nerds, heads of data science, and chief data officers can contribute to sustainability in AI. We would like to discuss this from different angles, such as the contribution of data science to the SDGs as well as running climate-neutral model development. The participants should gain some understanding of the carbon footprint of intensive modeling and of what the industry can do to reduce it.

Data Literacy in Big Pharma: What Works, What Doesn’t – Learnings at MSD

Speakers:

Rafael Knuth

Summary:

While pharma companies are increasingly realizing the value of data, they also need to realize the importance of a data mindset among non-data employees. Without the right mindset, neither BI nor AI will be utilized. MSD has implemented a 3-year data literacy program to enable marketing and sales employees to understand customer data and insights and to know how to use them. We are in the middle of this journey and will share the approaches we have used and our experience so far.

Machine Learning Assisted Process Optimization as a Service for Health Insurance Companies

Summary:

German health insurance companies are obliged to check billings for accuracy. The effort involved is enormous, and the use of machine learning promises great optimization potential. The talk presents the experience gained here by SpectrumK, a provider of data services for health insurance companies. It discusses how different regimes (e.g. drugs, hospital stays, home health care) come with different requirements for the machine learning technology used and how integration into existing processes can succeed.

The Role of Data and Analytics in the Manufacturing and Distribution of Covid Vaccines

Summary:

Over the past two years, ONE LOGIC supported a major Covid vaccine manufacturer and several government agencies in using data to reliably scale Covid vaccine production and distribute doses when and where they are needed. We believe this case study can be applied broadly across sensitive supply chains in biotech and pharma, but also on a global scale to issues like the current “chip crisis”.

Predicting Treatment Efficacy in Early Clinical Trial Phases at Roche

Speakers:

Hugo Loureiro

Summary:

We have created a new method based on the ROPRO (oncology prognostic score). Our method analyses the longitudinal response of patient cohorts to medications. We conducted a retrospective analysis in which we recreated clinical trials from a large real-world dataset and from real in-house clinical trials. Using this new method we detected the treatment benefit earlier than with established methodology. This case study shows great promise as a clinical development decision support tool.

Live Predictive Analytics for an Urgent Care System at Greater Manchester

Summary:

The Greater Manchester case study tells a compelling story about collaboration on data sharing across a health system and the utilisation of business intelligence technologies. Multiple health care providers work as a collective urgent care system, sharing system pressures using business intelligence and predictive analytics. The reporting is not only near live but also supports a view of department pressures above and beyond how many people are waiting for treatment. Predictive metrics enable ambulance flows to be directed proactively so that providers can support each other as a system.

Investigating the Effects of Therapeutic Antibodies Using an Imageflow and AI Pipeline at Roche

Speakers:

Ali Boushehri

Summary:

In our work, we created an AI and high-throughput imaging pipeline that can help biologists generate thousands of single-cell images and analyze them in a short time. This pipeline assists biologists in designing different antibodies, understanding their mode of action and predicting their efficacy.

Using Deep Learning to Prevent Deceptive Unicode Phishing Attacks

Summary:

Unicode has unified characters from world languages, symbols and emoji into a single standard, enabling easy, interoperable communication. However, in recent years the availability of similar symbols from diverse languages has been exploited to deceive users for malicious ends. Deep learning has given us the tools to prevent these attacks. You will be shown how this exploit works and how to detect and prevent it by training a deep learning model to detect visual similarities between characters.
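
To illustrate the exploit itself (a minimal sketch, not the presenter's detection model): characters from different scripts can look nearly identical while being distinct code points, which is exactly the kind of visual similarity a deep learning model can be trained to recognise.

```python
import unicodedata

legit = "paypal.com"
spoofed = "p\u0430yp\u0430l.com"  # U+0430 is CYRILLIC SMALL LETTER A, a look-alike of Latin "a"

print(legit == spoofed)  # False, although both strings render almost identically
for ch in sorted(set(spoofed)):
    print(ch, f"U+{ord(ch):04X}", unicodedata.name(ch))
```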

Next Generation Data Mesh for Machine Learning

Summary:

In recent years, there have been various efforts towards product thinking and decentralized data loading using data meshes. However, in deep learning, data loading is still challenging to master at scale. In this talk, we present our decentralized data loading solution and show why flexibility and collaboration are key to enabling novel ML use cases. We hope to make large-scale model training accessible to a wider community and move towards more sustainable ML.

Extracting Structured Data from Free-form Customer Requests for felmo

Summary:

While most people prefer writing free-form text, it is easier to process structured data. This is a dilemma many companies face, as did our customer felmo (a mobile vet service) when asking users to enter appointment reasons for vet visits. We were able to leverage a Sentence-BERT architecture to help them convert these free-text inputs into structured data, which they can now use to improve their appointment preparation and scheduling. We will share our learnings from applying Sentence-BERT to this task.
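
A rough sketch of the general idea (illustrative model name and categories, using a recent release of the sentence-transformers library, not felmo's production pipeline): embed the free-text reason and map it to the most similar entry in a predefined catalogue of structured visit reasons.

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative structured catalogue of visit reasons.
categories = [
    "vaccination",
    "limping / lameness",
    "skin irritation",
    "dental check",
    "digestive problems",
]

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
cat_emb = model.encode(categories, convert_to_tensor=True)

def classify(free_text: str) -> str:
    """Return the catalogue entry whose embedding is closest to the free-text input."""
    query = model.encode(free_text, convert_to_tensor=True)
    scores = util.cos_sim(query, cat_emb)[0]
    return categories[int(scores.argmax())]

print(classify("My dog keeps scratching his ear and the skin looks red"))
```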

Handle Deep Learning Projects in the Industry: Continental’s Visual Perception Provider

Summary:

For a manufacturer, visual inspection is a crucial part of keeping product quality at a high level. Beyond that, many new applications become possible by combining deep learning with computer vision. Deep learning models can be industrialized with the Visual Perception Provider (VPP), a service developed in-house at Continental Tires. The talk covers the reasoning behind why we decided to go that route and what we are doing. Some use cases built with the Visual Perception Provider will also be demonstrated.

How to Detect Silent Failures in Machine Learning Models

Summary:

AI algorithms deteriorate and fail silently over time, impacting the business’s bottom line. The talk focuses on how you should monitor machine learning in production. It is a conceptual and informative talk addressed to data scientists and machine learning engineers. We will learn about the types of failures and how to detect and address them.
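
One common building block of such monitoring (a minimal, illustrative sketch, not the speaker's tooling) is comparing the distribution of a production feature against a training-time reference window, for example with a Kolmogorov-Smirnov test, so covariate drift can be flagged before ground-truth labels arrive.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5000)   # feature at training time
production = rng.normal(loc=0.4, scale=1.0, size=5000)  # same feature in production, drifted

stat, p_value = ks_2samp(reference, production)
if p_value < 0.01:
    print(f"Possible covariate drift detected (KS statistic={stat:.3f}, p={p_value:.2e})")
```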

Named Entity Recognition Deployed in Minutes: NERDA and FastAPI to Deploy Transfer Learning Quickly

Summary:

In this session, two open source libraries will be demonstrated to show you how you can quickly deploy custom Named Entity Recognition (NER) solutions. NERDA is an easy-to-use interface for applying pre-trained transformer models (e.g. based on Hugging Face) to your own challenges. Combined with FastAPI, a lightweight web framework for Python, you can build your solution in a short time.
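
A minimal sketch of the deployment half (assuming a NERDA model fine-tuned elsewhere that exposes its documented `predict_text` method; the module path and route name are hypothetical):

```python
from fastapi import FastAPI
from pydantic import BaseModel

# Assumed: a trained NERDA.models.NERDA instance, fine-tuned on your own annotations,
# loaded in a hypothetical module of your project.
from my_project.ner import model

app = FastAPI()

class NERRequest(BaseModel):
    text: str

@app.post("/ner")
def extract_entities(req: NERRequest):
    # predict_text returns the tokenized sentences together with the predicted tags.
    tokens, tags = model.predict_text(req.text)
    return {"tokens": tokens, "tags": tags}
```

Served locally with e.g. `uvicorn main:app --reload` (if the file is named main.py), this gives an HTTP endpoint for the NER model in a handful of lines.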

Deploying Deep Learning Models using Apache TVM

Summary:

This session is an introduction to Apache TVM, an end-to-end compiler framework for deep learning models. It can compile machine learning models from various deep learning frameworks to machine code for different types of hardware targets such as CPUs, GPUs, FPGAs and microcontrollers. It provides bindings for different higher-level languages such as C++ and Rust, and also has provisions for auto-tuning models for different hardware targets. You will learn how to use Apache TVM to deploy your models on different target systems.
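
As a flavour of the workflow (a minimal sketch following TVM's standard Relay flow; the ONNX file name and input shape are illustrative):

```python
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Load a model exported to ONNX (file name and input shape are illustrative).
onnx_model = onnx.load("resnet18.onnx")
shape_dict = {"input": (1, 3, 224, 224)}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile for a local CPU; other targets (e.g. "cuda") follow the same pattern.
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Run the compiled module on a dummy input.
device = tvm.device(target, 0)
module = graph_executor.GraphModule(lib["default"](device))
module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
print(module.get_output(0).numpy().shape)
```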

Multi-Level Neuroevolution Deep Learning Framework for Multivariate Anomaly Detection

Summary:

This session presents Anomaly Detection Neuroevolution (AD-NEv) – a multi-level optimized neuroevolution framework. The method adapts genetic algorithms for: i) creating an ensemble model based on the bagging technique; ii) optimizing the topology of single anomaly detection models; iii) non-gradient fine-tuning of network parameters. The results show that the models created by AD-NEv achieve significantly better results than well-known anomaly detection deep learning models.

Survival Regression for Cost-Optimal Maintenance of Wearing-Parts under Various Operating Conditions

Summary:

Estimating the lifetime of a machine, or of a wearing part in a complex machine, is a requirement in cost-optimal maintenance planning to reduce costly downtime or avoid overly frequent maintenance. Regression models that map features directly to time-to-failure are not suited to the real world because of censored data. This talk shows how survival regression can be utilized for a maintenance planning application. We discuss and demonstrate available survival analysis tools, their strengths and limitations.
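
A minimal sketch of the kind of tooling discussed (made-up data, using the lifelines library, not the presenter's material): fit a Cox proportional hazards model on right-censored run-to-failure records and predict a median lifetime under given operating conditions.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Illustrative run-to-failure records: hours in service, whether a failure was observed
# (0 = still running, i.e. right-censored), and operating-condition features.
df = pd.DataFrame({
    "hours":    [1200, 3400, 2900, 800, 4100, 2500],
    "failed":   [1,    0,    1,    1,   0,    1],
    "load":     [0.9,  0.5,  0.7,  1.1, 0.4,  0.8],
    "avg_temp": [75,   62,   70,   82,  60,   73],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="hours", event_col="failed")
cph.print_summary()

# Median predicted lifetime for a new wearing part under heavy load.
new_part = pd.DataFrame({"load": [1.0], "avg_temp": [78]})
print(cph.predict_median(new_part))
```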

Towards Predictive Maintenance: A Big-Data Platform for the RATIONAL AG

Speakers:

Robert Pesch

Summary:

Processing, storing, and analyzing high-dimensional IoT data from thousands of field devices is challenging, but enables new business opportunities for classical hardware manufacturers. We developed a big data platform for the RATIONAL AG to process IoT data from large commercial cooking appliances. We present challenges, pitfalls, and the system architecture of our implemented solution, which serves reporting and condition monitoring for thousands of customers.

Thinking Industrial Human-centered AI End-to-end: From Imputations to Psychology for Training Data

Summary:

After quick successful POCs, the productive rollout of AI solutions often comes with unexpected challenges, especially for human-in-the-loop applications. The presentation will illustrate a holistic solution in three sections, starting with an end-to-end overview with the example of ML-based assistance systems, followed by deep-dives into the two main pain points: Imputation approaches for dealing with imperfect data and the psychology behind motivators for human interaction with such systems.

Transforming The Retail Industry with Transformers

Summary:

Despite the astounding results we have seen from transformers and language models in academia, deploying these models in industry is still very challenging because: 1) maintaining these models is very time- and cost-consuming, and 2) there is not yet enough clarity about when the advantages of these models outweigh their challenges compared to classical ML models. In this talk, we discuss these challenges and share our findings and recommendations from working on real world examples at SPINS.

Becoming a Pokémon Master with DVC: Experiment Pipelines for Deep Learning Projects

Speakers:

Rob de Wit

Summary:

In my quest to become a Pokémon master, I need to learn a lot about their types. I’d rather create a model to do so for me. A simple one-off won’t do: to be ready for upcoming generations, I need a pipeline for experimenting with new datasets and configs. We will set up a codified ML pipeline using the open-source DVC library. This will help us adopt a reproducible, experiment-driven approach to ML, which will boost our ability to iterate over models and compare them to find the best one.

Find-Next-Job: AI System for Recommending Job Transitions Across Industries

Summary:

Despite the wealth of information available to job seekers, choosing one’s next job remains a complex endeavor. We will present an approach for creating job recommendations across industries by combining hand-annotated job data and a multivariate technique combining job title, job description, and skill set similarity. This method was used to scan 9 million job transitions to identify the best matches for any given job and uncover deep insights into how the labor market is organized today.

Boost your Customer Understanding Using Survival Analysis

Summary:

Survival analysis, the modeling of time-to-event data, is a statistical field with a long history and great potential for marketing and analytics. In this deep dive, you will get a brief look at the origins of survival analysis and its applications in the field of customer retention, which is of particular importance for subscription-based growth. Learn how to grow your understanding of customer churn and how to better predict your customers’ lifetime, along with the monetary aspects.
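
As a taste of how this looks in code (made-up data, using the lifelines library, not the presenter's material): a Kaplan-Meier fitter estimates a retention curve while correctly treating customers who are still subscribed as censored observations.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Illustrative subscription records: months observed and whether the customer churned
# (0 = still active at the end of the observation window, i.e. censored).
df = pd.DataFrame({
    "months":  [3, 12, 7, 24, 5, 18, 2, 30],
    "churned": [1, 0,  1, 0,  1, 1,  1, 0],
})

kmf = KaplanMeierFitter()
kmf.fit(durations=df["months"], event_observed=df["churned"])

# Estimated probability that a customer is still subscribed after 6 and 12 months,
# plus the median estimated customer lifetime.
print(kmf.survival_function_at_times([6, 12]))
print(kmf.median_survival_time_)
```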

Attribution at Springer Nature: Understanding the Journey of Journal Submissions

Summary:

For a publisher it is key to understand the customer journey of the submission process, i.e. which online touchpoints and marketing channels drive submissions. An attribution model has been evaluated using different machine learning approaches and introduced into the marketing organisation of Springer Nature. The combination of descriptive elements with a predictive component and a simulation approach helped us understand the journey, and prioritize and quantify the value of marketing activities.

Six Business Skills Critical for Data Scientists

Summary:

This talk will introduce the foundational business skills you’ll need to deliver business value and grow your career as an analyst. Drawing on best practices, published research, case studies and personal anecdotes from two decades of industry experience, we give an overview of foundational skills related to Company, Colleagues, Storytelling, Expectations, Results and Careers–emphasizing how each topic relates to your unique position as an analytics professional within a larger corporation.

How Data Literacy Drives Innovation – The Digital Academy Approach at EWE

Speakers:

Hauke Thaden

Summary:

Since 2017, Hauke is a Data Scientist at the innovation department of EWE AG in Oldenburg. Before that, he studied mathematics and worked as a researcher in the field of applied statistics. At EWE, Hauke is responsible for turning data into business value through data science projects in diverse application areas of the energy sector. Additionally, he focusses on spreading data literacy and data culture to enable colleagues to work with data and generate new ideas for data-driven solutions.

Leveraging Zero-trust Architecture Principles to Achieve World-class Enterprise Data Governance

Speakers:

Anna Kramer

Summary:

Global enterprises are increasingly relying on data analytics for decision making. To process data, firms leverage cloud-based data warehouses. As more on-prem data is moved to the cloud, the need for robust data governance controls to ensure data integrity, security, and regulatory adherence is mounting; however, existing governance processes are lagging. Here we present a zero-trust approach that can augment existing governance models and reduce exposure of sensitive data like PII.

Real-time Fraud Detection: Challenges and Solutions

Speakers:

Fawaz Ghali

Summary:

Fraud can be considerably reduced via speed, scalability, and stability. When investigating fraudulent activities with machine learning for fraud detection, decisions need to be made in microseconds, not seconds or even milliseconds. This becomes even more challenging when demand grows and scaling real-time fraud detection becomes a bottleneck. The talk will address these issues and provide solutions using the Hazelcast Open Source platform.

Authentication Vulnerability Detection on Tabular Data in Black Box Setting

Speakers:

Debasmita Das

Summary:

Adversarial archetypes exploit the workings of any system to disrupt the robustness and decision-making of the underlying algorithms. This deep dive presents AuthSHAP, a model-agnostic and robust implementation of SHAP to uncover the extent to which key features are not appropriated by any model in its decision making. This ‘knowledge’ is significant information for a fraudster designing intelligent or adversarial attacks. The presentation shows that even in black-box settings, it is possible to understand the vulnerability.

How Eye Contact with a Robo-Advisor Shapes Investment Decisions

Summary:

Making eye contact is one of the most powerful ways to build relationships—whether it’s a new date or a potential business partner. But is it also true for robo-advisors which give consumers investment advice? Today many consumers do not trust robo-advisors, which are mostly text-based interfaces. In this talk, we present a new robo-advisor prototype in the form of a virtual social robot that makes eye contact and show how it impacts consumer trust and investment decisions in online experiments.

Leveraging NLP to Understand Reader Preferences for Neue Osnabrücker Zeitung (NOZ)

Summary:

Being able to predict KPIs for new, not yet published articles is a key factor in process optimization to assist editors with their daily workflow. Modern NLP methods allow us to use text content efficiently and to understand the connection between natural language and the respective KPIs. By combining statistical models (GAM), modern NLP methods (BERT), and XAI tools (SHAP), we are able to understand specific connections between text content and our KPIs.

How to Make the Opposite Not Attract? On a Date with the Similarity Learning

Summary:

Classification is one of the most frequently solved problems using machine learning. Unfortunately, it cannot handle cases where the number of classes varies over time, and it requires all the data to be labelled. There is another approach designed for cases when we can't perform full data annotation and/or would like to dynamically modify the number of classes. Similarity learning is capable of solving such problems, even with extreme classification. We're going to show how to use such models in production.

Safety-Critical Autonomous Vehicles: Is the Neural Network Aware of the Unknown?

Summary:

Many safety-critical systems, such as autonomous vehicles, rely on neural networks as the state of the art for image classification. Despite high accuracy, their final classification decision is difficult to verify. Uncertainty about how neural networks will behave is a challenge to safety. A known issue is that they assign high probabilities to unknown images. This deep dive will present some novel solutions for inspecting certain parameters of a trained neural network to detect out-of-distribution data.
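
For context (a minimal sketch of the classic maximum-softmax-probability baseline with made-up numbers, not the novel parameter-inspection methods presented in the talk): an input is flagged as potentially out-of-distribution when the network's top softmax score falls below a threshold.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

# Illustrative logits from a trained classifier for two inputs.
in_dist_logits = np.array([8.1, 1.2, 0.3, -0.5])  # confident on a familiar-looking image
unknown_logits = np.array([1.1, 0.9, 1.0, 0.8])   # flat scores on an unfamiliar image

THRESHOLD = 0.7
for name, logits in [("in-distribution", in_dist_logits), ("unknown", unknown_logits)]:
    confidence = softmax(logits).max()
    flag = "possible OOD" if confidence < THRESHOLD else "ok"
    print(f"{name}: max softmax = {confidence:.2f} -> {flag}")
```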

Efficient Data-Driven Marketing: Machine Learning at Major Telecom and Banking Companies

Summary:

Proper utilization of machine learning and predictive modeling allows companies to increase profit, gain competitive advantages, grow, and win market competition. This is illustrated by results achieved across major telecom and banking companies. Efficient proactive retention, revenue stimulation, and second-best-offer approaches require powerful churn and propensity models. There is a large number of techniques that can be applied to any modelling task, but it is almost impossible to know at the outset which will be most effective. Also, there are many ways to interconnect IT and business. One way to address these topics is through advanced analytical base tables, which proved to be highly efficient in all the illustrated cases.

AI Optimization of the Markdown Process: How Benetton Raised Revenues with a Prescriptive Approach

Summary:

Markdowns allow retailers to get rid of dead inventory and ensure turnover. Traditionally, even AI tools use a rule-based approach that constrains optimization and hurts revenues. Fashion retailer Benetton partnered with Evo to explore a new prescriptive approach to optimize markdown performance through complex process mapping. The resulting algorithm relied on a formal AI forecasting model feeding a price optimization model, a clear improvement that ultimately increased revenues by more than 5%.

Big Data Analytics of Customer Preferences with NLP at PAYBACK

Summary:

Analysis of customer preferences and behavior is crucial for running successful marketing campaigns and providing valuable insights into the strategic planning of CRM at PAYBACK. Multiple Deep Learning tools have been developed to study customer experience through analysis of a large amount of data presented as an interaction history with business products. Here, we discuss a product-focused approach for the analysis of large amounts of customer-associated textual data using industrial NLP tools.

Practical Framework for Secure AI Development Lifecycle

Summary:

AI systems are fundamentally vulnerable – ignoring AI security risks will jeopardize the safety of people and the security of companies. This session presents a framework for secure AI development covering the entire model lifecycle. It is relevant for AI stakeholders across functions and levels in AI development, governance, and product security teams. The stages and activities described in the framework should enable AI stakeholders to implement a secure development lifecycle for their AI systems.

Responsible AI Starts with Responsible Design

Summary:

The desire to embody Responsible AI practices requires an understanding of, the context around, and the impact on the end user. To achieve this, design and research are just as pivotal to the RAI conversation as ML. There is no bigger risk, and no greater irresponsibility, than to not interface with those who will be affected by your design. This talk will share how to navigate customer relationships to encourage end user contact and mitigate assumptions and therefore risk.

The Data-2-Value Transformation

Summary:

Transforming data into value is a strategic focus for companies across industries. With the gap between leading data driven businesses and late movers growing rapidly, decision makers are wondering what secret ingredient helps companies to successfully leapfrog roadblocks on their value journey. Since this is evidently not a one-dimensional challenge, we’ll de-mystify some unavoidable buzzwords and take a hands-on perspective on what really helped companies to sustainably turn data into value.

Building Pricing Agents Starting from Scratch

Speakers:

Taras Firman

Summary:

As people moved to buying and selling online because of Covid-19, the importance of making the right pricing decisions increased dramatically. The market is extremely competitive and supply chains are very complex, which is why being a top seller is much more complicated than it was before. This deep dive session will show how to build pricing agents that react to different changes in the market and keep your products among the top sellers.

Dealing With the New Artificial Intelligence Act: How to Build Compliant and Risk-proof AI

Speakers:

Ayush Patel

Summary:

During this session, we will discuss the different risk-based categories of AI laid out by the EU’s Artificial Intelligence Act and find out how to become more admissible as per the Act. Thereafter, we will walk through the concrete steps, tools, and practices such as monitoring, explainability, model fairness, and compliance that are instrumental in achieving Responsible AI and building more risk-proof and market-friendly solutions.

An Approach to Optimize Personalized Treatments in CRM-Campaigns at PAYBACK

Summary:

Evaluation and optimization of the effect of personalized treatments plays an important role in marketing campaigns and CRM strategies at PAYBACK. However, optimization goals are highly variable due to the wide range of applications and constraints imposed by the industry. Here we compare the use of optimization tools to overcome problems encountered in various business settings, such as cost/profit optimization with restrictions enforced by the CRM objectives.

A Simple Approach to Simultaneously Optimize Models and Business at EnBW

Summary:

To evaluate the goodness of models we have a whole lot of KPIs, each with an underlying idea of “what goodness is”. However, the impact on the real world (when the model is applied there) is very rarely taken systematically into account when the goodness of models is evaluated. Here we show a simple idea for assessing the expected improvement (win) in a real-world situation for any classification problem, using campaign optimization and credit rating as examples. Of course, this also allows comparing the real-world impact on win of different classifiers.

Continuous Integration for Machine Learning Applications – A Practical Example

Summary:

Machine learning models are becoming obsolete and must be retrained – this is the current widespread tenor. Is this actually true? And if yes, which components does a CI/CD pipeline for machine learning really need – and which are optional? How can the whole thing be implemented without building a complete Machine Learning Platform team? And which challenges are still difficult to solve at present? A field report including (mis)decisions, which will help to choose the right path for your own challenges.

Using Matrix Factorization for Real-time Personalization of Volkswagen Websites

Summary:

AI technology allows improving the UX significantly by showing the most relevant personalized content in real time, depending on the user’s browsing activity. By using matrix factorization with the implicit Alternating Least Squares method, we can calculate individual ratings for every user. Those ratings can be used to rank all the content available on the website, so that only the most relevant content is shown to each user. The result: increased engagement and a lead rate uplift of 42%.
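
As a flavour of the approach (a minimal sketch with made-up interaction data, using the implicit library's 0.5+ API rather than the Volkswagen implementation): fit an implicit-feedback ALS model and rank content items for a single user.

```python
import numpy as np
import scipy.sparse as sp
from implicit.als import AlternatingLeastSquares

# Illustrative implicit-feedback matrix: rows = users, columns = content items,
# values = interaction strength (e.g. number of page views).
user_items = sp.csr_matrix(np.array([
    [3, 0, 1, 0, 0],
    [0, 2, 0, 0, 4],
    [1, 0, 0, 5, 0],
], dtype=np.float32))

model = AlternatingLeastSquares(factors=16, regularization=0.01, iterations=15)
model.fit(user_items)

# Rank content for user 0; already-seen items are filtered out.
item_ids, scores = model.recommend(0, user_items[0], N=3, filter_already_liked_items=True)
print(item_ids, scores)
```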

Achieving Operational Excellence by Using Data in Health Care

Speakers:

Ola Kotun

Summary:

Healthcare is about people: the patients receiving care, the people delivering it, and those creating ways to support this. Data, via predictive analytics, is an enabler that supports a healthcare provider or system in achieving operational excellence and addressing the problems these people face. Doing this well not only delivers patient outcomes but also supports preventative and proactive population health management. In this table discussion, we would like to discuss these questions: What is the connection between operational excellence, predictive data analytics and artificial intelligence? What are the driving forces for achieving organizational performance by using artificial intelligence? What are the barriers to achieving organizational performance using artificial intelligence?

Bringing Life and Motion to AI Explainability in Context Of Chronic Kidney Disease Prediction

Summary:

SHAP is a great tool to help developers and users understand black-box models. To push it to the next level, we will show how to leverage Dash, SHAP, GIFs, LSTMs and auto-encoders to generate interactive dashboards with animations and visual representations that show how different AI models learn and change their minds while being progressively trained with growing amounts of data. We will show this application in the context of Chronic Kidney Disease prediction and broader healthcare AI.
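
As a starting point (a minimal sketch of plain SHAP usage on made-up tabular data, not the animated Dash dashboards shown in the talk): compute per-feature contributions for a tree model and visualise them in a summary plot.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Made-up tabular features; a real CKD dataset would use lab values such as creatinine.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "age": rng.integers(30, 80, 200),
    "creatinine": rng.normal(1.2, 0.4, 200),
    "blood_pressure": rng.normal(130, 15, 200),
})
y = (X["creatinine"] + 0.01 * X["age"] + rng.normal(0, 0.2, 200) > 1.8).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer gives per-feature contributions (in log-odds) for each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)
```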

Designing Geo-Experiments at Google: A Privacy-friendly Tool to Measure Advertising Incrementality

Summary:

Geo-experiments – advertising experiments where the treatment and control groups are chosen based on users’ locations – provide a privacy-friendly alternative to cookie-based online experiments that can also be used to measure offline effects of online advertising. This session discusses the algorithms we use at Google to design the experiment regions based on geographical user behavior, and the rigorous statistical methods to analyze randomized experiments based on these regions.

Causal Geographical Experimentation in Marketing Made Easy

Summary:

The changes in the ads ecosystem have led marketers to lean on existing aggregate experimentation tools that assume a predetermined treatment effect. Choosing the treatment group to ensure you have high chances of detecting an effect is non-trivial. Built by Meta Open Source, GeoLift solves this problem by building well powered geographical experiments. Join us to go over why geographical experiments are necessary and their implications in the marketing industry, along with a demo of GeoLift.

Predicting Wall Street Using Artificial Intelligence and New Alternative Data Sources

Summary:

The latest formula for making sound investment decisions involves mining new alternative data sources, using predictive analytics, swarm intelligence, reinforcement learning, and high-performance computing. In this talk, Prof. Anasse Bari explains how those components are driving value in the world of finance and how new Artificial Intelligence algorithms are reinventing Wall Street. He will filter fact from fiction, and outline successful use cases that he has recently led (e.g. how social performance and consumer reviews could be used as predictive features, how to derive actionable insights from geospatial images.). Prof. Bari will also present an overview that can help you design an AI strategy and implement viable solutions to generate a “predictive analytics-based investment thesis.”
