TALKS

AI in Action

Adversarial Machine Learning

Most machine learning algorithms involve optimizing a single set of parameters to decrease a single cost function. In adversarial machine learning, two or more "players" each adapt their own parameters to decrease their own cost, in competition with the other players. In some adversarial machine learning algorithms, the algorithm designer contrives this competition between two machine learning models in order to produce a beneficial side effect. For example, the generative adversarial networks framework involves a contrived conflict between a generator network and a discriminator network that results in the generator learning to produce realistic data samples. In other contexts, adversarial machine learning models a real conflict, for example, between spam detectors and spammers. In general, moving machine learning from optimization and a single cost to game theory and multiple costs has led to new insights in many application areas.
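To make the two-player setup concrete, here is a minimal toy sketch (not from the talk): each player runs gradient descent on its own cost, simultaneously. Player a chases player b, while b tries to evade a but is anchored near zero, a loose stand-in for the generator/discriminator tug-of-war. All names and costs are illustrative.

```python
# Two "players" adapt their own parameters to decrease their own costs.
# Player a minimizes (a - b)^2 (match b); player b minimizes
# -(a - b)^2 + b^2 (evade a, but stay anchored near zero).

def grad_a(a, b):
    # d/da of (a - b)^2
    return 2 * (a - b)

def grad_b(a, b):
    # d/db of -(a - b)^2 + b^2
    return 2 * (a - b) + 2 * b

def play(steps=500, lr=0.05):
    a, b = 2.0, -1.0
    for _ in range(steps):
        ga, gb = grad_a(a, b), grad_b(a, b)  # simultaneous gradient steps
        a -= lr * ga
        b -= lr * gb
    return a, b
```

For this particular game the simultaneous dynamics spiral into the equilibrium (0, 0); in general, adversarial dynamics need not converge, which is exactly why the game-theoretic view yields insights the single-cost view does not.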

AI Patterns and Antipatterns

Building artificial intelligence solutions in the real world is a road full of challenges and very few answers. From training and regularization to deployment and monitoring, we are only beginning to figure out the best practices for building AI solutions at scale. 

This session presents a series of patterns and anti-patterns of large scale AI solutions. Covering the entire lifecycle of AI solutions, from training to deployment, we will explore patterns and architectures that should be followed to build AI solutions at scale as well as not-so-obvious anti-patterns that can result in major disasters. To keep things practical, we will explore our AI patterns and anti-patterns through the lens of real customers building real AI solutions. 


First Line of Defense: AI in Cybersecurity

Given the state of cybersecurity today, introducing AI systems into the mix can serve as a real turning point. New AI algorithms use Machine Learning (ML) to adapt over time, making it easier to respond to cybersecurity risks. Meanwhile, new generations of malware and cyber-attacks can be difficult to detect with conventional cybersecurity protocols: they evolve over time, so more dynamic approaches are necessary.

The Unity Engine as a Platform for Deep Reinforcement Learning Research

This talk will walk through using Unity with the ML-Agents toolkit for conducting Machine Learning research, and aiding the design and testing of game behavior. We will walk through both the motivation and design of the platform, with a particular focus on scenarios applicable to game AI. The tutorial will also include an explanation of the kinds of algorithms being used to train machine learning agents, including various Reinforcement Learning and Supervised Learning methods.
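As a taste of the Reinforcement Learning methods mentioned, here is a minimal tabular Q-learning sketch on a toy five-state corridor. This is purely illustrative; it does not use Unity or the ML-Agents API, and all names are made up.

```python
import random

# Tabular Q-learning on a toy 5-state corridor. The agent starts at
# state 0 and earns a reward of 1.0 for reaching state 4.
# Actions: 0 = left, 1 = right.
N_STATES, ACTIONS = 5, (0, 1)

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1  # next state, reward, done

def train(episodes=300, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        for _ in range(10_000):  # step cap per episode, for safety
            # Epsilon-greedy action selection.
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[s][x])
            s2, r, done = step(s, a)
            # Q-learning update toward the bootstrapped target.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
            if done:
                break
    return q
```

After training, the learned Q-values prefer "right" in every non-terminal state, i.e., the agent has discovered the shortest path to the reward; ML-Agents applies the same idea with far richer observations and environments.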

Serverless Deployment of Deep Learning Models

Deep learning is becoming essential for many businesses, for both internal and external projects. One of the main issues with deployment is finding the right way to deploy a model within the company. A serverless approach to deep learning provides a cheap, scalable, and reliable architecture for it.


Serverless architecture changes the rules of the game: instead of thinking about cluster management, scalability, and query processing, you can focus specifically on training the model. The downside of this approach is that you have to keep certain limitations in mind and integrate your model in the right fashion.


I will show how to deploy a TensorFlow model for image captioning on AWS infrastructure. AWS's Function-as-a-Service offering, Lambda, can achieve significant results: 20-30k invocations per dollar on a completely pay-as-you-go model, up to 40k functions running in parallel, and easy integration with other AWS services. This makes it easy to connect the model to an API, chatbot, database, stream of events, etc.
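The Lambda pattern just described can be sketched as follows. The names and the "model" are illustrative, not from the talk; in a real deployment the load step would fetch TensorFlow weights (e.g., from S3) rather than build a stub.

```python
import json

_model = None  # cached across invocations within one warm container

def load_model():
    # Stand-in for the expensive part: in practice, download weights
    # and build the TensorFlow graph here. Stubbed with a trivial
    # "captioner" so the pattern itself is visible.
    return lambda image_id: f"a photo ({image_id})"

def lambda_handler(event, context):
    global _model
    if _model is None:       # cold start: pay the load cost once
        _model = load_model()
    caption = _model(event["image_id"])
    return {"statusCode": 200, "body": json.dumps({"caption": caption})}
```

Keeping the model load outside the per-request path is what makes the pay-as-you-go economics work: only the first request on a container pays the cold-start cost.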

Understanding World Food Economy with Satellite Images

It has become possible to observe crop growth from satellites daily, at a global scale. Based on this imagery, we can identify and share agriculture-specific signals (insights) such as the presence of farming activity, the presence of irrigation systems, crop classification, and productivity assessment. The pipeline starts with a set of images specifically designed for daily monitoring of the growth of commodity crops: corn, soybean, rice and wheat. To process this data we use our processing and delivery system, with ML (boosting) used for understanding vegetation patterns and AI for scaling the models to other climate zones.
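The talk's own pipeline is not shown here, but the classic per-pixel vegetation signal such systems build on is NDVI (Normalized Difference Vegetation Index), computed from the near-infrared and red bands. A small sketch with made-up reflectance values:

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red), per pixel.
# Values near +1 indicate dense green vegetation; bare soil sits near 0.

def ndvi(nir, red, eps=1e-9):
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)  # eps guards against div-by-zero

# Healthy crops reflect much more NIR than red light.
dense_crop = ndvi([0.50], [0.08])  # high NDVI
bare_soil  = ndvi([0.30], [0.25])  # low NDVI
```

Time series of indices like this, computed daily per field, are the kind of features that boosting models consume for crop classification and productivity assessment.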

Scaling Computer Vision in the Cloud with Kubernetes and TensorFlow

Reza Zadeh offers an overview of Matroid’s Kubernetes deployment, which provides customized computer vision and stream monitoring to a large number of users, and demonstrates how to customize computer vision neural network models in the browser. Along the way, Reza explains how Matroid builds, trains, and visualizes TensorFlow models, which are provided at scale to monitor video streams.

Deep Learning and Natural Language Processing for Product Title Summarization

Online marketplaces often have millions of products, and product titles are typically made intentionally long so the products can be found by search engines. With voice shopping on the verge of taking off (it is estimated to hit $40+ billion across the U.S. and U.K. by 2022), short versions (summaries) of product titles are desired to improve the user experience of voice shopping. 


In this talk, we present a few different approaches to solve this problem using Natural Language Processing and Deep Learning. We give a historical overview of the technology advancement in these approaches, and compare the evaluation results on a real world dataset.

Scalable Training with Distributed Deep Learning

Modern deep learning models are getting more and more computationally demanding, which has begun to hurt experimentation and hyperparameter tuning speed. GPUs are getting bigger and stronger, but vertical scaling is too slow to keep up with the demand; we need to go horizontal: multi-GPU and multi-machine.


In this talk, I'll present the problems that different distributed learning approaches solve, so you'll know when to start looking into distributed learning if you encounter these hindrances. I'll also showcase the latest distributed learning technologies, e.g., Distributed TensorFlow and Horovod.
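The core of data-parallel training, which is what Horovod's ring-allreduce provides, is simple: each worker computes gradients on its own data shard, the gradients are averaged across workers, and every worker applies the same averaged update. Here is that logic simulated on one machine with a toy linear model (illustrative names; no actual Horovod calls):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

def grad(w, xb, yb):
    # Gradient of mean squared error for a linear model.
    return 2 * xb.T @ (xb @ w - yb) / len(xb)

n_workers, w = 4, np.zeros(3)
shards = np.array_split(np.arange(len(X)), n_workers)  # one shard per worker
for step in range(200):
    # Each "worker" computes a gradient on its shard...
    grads = [grad(w, X[idx], y[idx]) for idx in shards]
    # ...then an allreduce averages them, and all workers update in sync.
    w -= 0.1 * np.mean(grads, axis=0)
```

Because the average of per-shard gradients equals the full-batch gradient, the workers stay in lockstep; the distributed frameworks' job is to make that averaging step fast over a network.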

Using AI in Medicine: Analysis of Auscultatory Sounds

Health care generates a tremendous amount of data, doubling every 24 months, the analysis of which far surpasses the capabilities of human minds. However, it poses no problem to algorithms based on neural networks, which are at present revolutionizing more and more fields of medicine. People often fall ill with various respiratory system infections, thus requiring diagnosis and the distinguishing of minor infections from more severe ones. At present the only option is to visit a doctor, who will auscultate the patient using a stethoscope and decide on further procedure. The doctor's diagnosis is subjective and encumbered with a large probability of error. The aim of the StethoMe™ project is to create an intelligent, electronic stethoscope. Our device will analyze recorded sounds and, based on AI algorithms, inform the user if any abnormal sounds are registered. 

Why Scala for Data Science?

When it comes to Machine Learning or Deep Learning, many people believe that Python is the only choice of programming language. There are several objective reasons to pick it for those purposes, but a valid alternative exists. This talk will walk through using Scala to bring ML/DL to the JVM stack in order to achieve the same results you could in Python. You will also learn that these two popular programming languages aren't always mutually exclusive, and how they can work together in some scenarios.

Using Word2Vec in Postgres

Currently, a tremendous amount of data lives in relational databases. Bringing machine learning models into the database opens new use cases for consumers of existing data warehouses. 


This talk will demonstrate how to use word2vec models in a Postgres database to facilitate semantic search of job posts. Attendees will learn how to structure models for usage in a relational database, and how to improve the performance of queries that operate against a mixture of relational data and machine learning models.
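The ranking logic behind such a word2vec-backed semantic search can be shown in plain Python: embed a document as the average of its word vectors, then rank documents by cosine similarity to the query embedding. In the talk this runs inside Postgres; the tiny 3-d "vectors" and job posts below are toys of my own, not the talk's data.

```python
import numpy as np

vectors = {  # toy stand-ins for word2vec vectors
    "python":    np.array([0.9, 0.1, 0.0]),
    "java":      np.array([0.8, 0.2, 0.1]),
    "developer": np.array([0.7, 0.6, 0.1]),
    "nurse":     np.array([0.0, 0.2, 0.9]),
    "hospital":  np.array([0.1, 0.1, 0.8]),
}

def embed(words):
    # A document embedding: the mean of its word vectors.
    return np.mean([vectors[w] for w in words], axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

posts = {
    "backend engineer": ["python", "developer"],
    "registered nurse": ["nurse", "hospital"],
}

def search(query_words):
    # Rank job posts by cosine similarity to the query embedding.
    q = embed(query_words)
    return max(posts, key=lambda p: cosine(q, embed(posts[p])))
```

Note that a query like "java developer" matches the Python post even with no shared keyword, which is exactly what makes the embedding-based search "semantic"; in Postgres the same cosine computation would live in a SQL function over stored vector columns.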

Exploring Earth with Computer Vision

Our planet is teeming with human activity: agriculture, energy, logistics, and much more. The ongoing explosion of satellite imagery, combined with computer vision and other forms of machine learning, provides a means to understand these processes at a global scale. This talk will give a brief overview of satellite imagery (pros & cons, how it differs from normal imagery, some practical considerations), and a tour of applications of computer vision to satellite imagery, such as visual search over the planet.

Tuning the Un-Tunable: Advanced Techniques for Deep Learning

Models with lengthy training cycles, typical in deep learning, can be extremely expensive to train and tune. In certain instances, this high cost may even render tuning infeasible for a particular model; even when tuning is feasible, it is often extremely expensive. Popular methods for tuning these types of models, such as evolutionary algorithms, typically require several orders of magnitude more time and compute than other methods. And techniques like parallelism often trade away performance, resulting in the use of many more expensive computational resources. This leaves most teams with few good options for tuning particularly expensive deep learning models. 


But new methods related to task sampling in the tuning process create the chance for teams to dramatically lower the cost of tuning these models. This method, referred to as multitask optimization, combines “strong anytime performance” from bandit-based methods with “strong eventual performance” of Bayesian optimization. As a result, this process can unlock tuning for some deep learning models that have particularly lengthy training and tuning cycles.
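The bandit half of the story ("strong anytime performance") can be illustrated with successive halving: cheap low-budget evaluations screen many configurations, only the best half survives, and the survivors get double the budget. This is a sketch under made-up assumptions, not SigOpt's implementation; the noisy objective below stands in for a partially-trained model's validation loss, where more budget means a less noisy estimate.

```python
import random

def noisy_loss(config, budget, rng):
    # Hypothetical objective: true quality is (config - 0.7)^2;
    # a larger evaluation budget yields a less noisy estimate.
    return (config - 0.7) ** 2 + rng.gauss(0, 0.2 / budget)

def successive_halving(n=16, budget=1, seed=0):
    rng = random.Random(seed)
    configs = [rng.random() for _ in range(n)]
    while len(configs) > 1:
        # Evaluate everyone at the current (cheap) budget.
        scores = [(noisy_loss(c, budget, rng), c) for c in configs]
        scores.sort()
        # Keep the best half; survivors get twice the budget next round.
        configs = [c for _, c in scores[: len(configs) // 2]]
        budget *= 2
    return configs[0]
```

Multitask optimization layers Bayesian optimization on top of this kind of cheap-task sampling, so the search keeps the bandit's anytime behavior while steering later, expensive evaluations with a probabilistic model.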


During this talk, Scott Clark, CEO of SigOpt, walks through a variety of methods for training models with lengthier training cycles before diving deep on this multitask optimization functionality. The rest of the talk will focus on how this type of method works and explain the ways in which deep learning experts are deploying it today. Finally, we will talk through implications of early findings in this area of research and next steps for exploring this functionality further. This is a particularly valuable and interesting talk for anyone who is working with large data sets or complex deep learning models.

An Update on Scikit-learn

This talk will provide a brief introduction to scikit-learn and its part in the machine learning ecosystem. It will also discuss recent additions to scikit-learn, such as better integration with pandas and better support for missing values and categorical data.


We'll end with some future directions, including better control over parallelization and better use of multi-core systems, as well as better tools for model inspection and model debugging.
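A small sketch of the pandas integration and missing-value/categorical support mentioned above: selecting DataFrame columns by name inside a ColumnTransformer, imputing missing numeric values, and one-hot encoding a categorical column. The data and column names are made up for illustration.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({
    "age": [25.0, None, 47.0, 33.0],          # numeric, with a missing value
    "city": ["paris", "tokyo", "paris", "lima"],  # categorical
    "bought": [0, 1, 1, 0],
})

# Columns are addressed by their pandas names, not positional indices.
pre = ColumnTransformer([
    ("num", SimpleImputer(strategy="median"), ["age"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["city"]),
])

clf = Pipeline([("pre", pre), ("model", LogisticRegression())])
clf.fit(df[["age", "city"]], df["bought"])
preds = clf.predict(df[["age", "city"]])
```

Bundling preprocessing into the Pipeline keeps imputation and encoding learned only from training data, which avoids leakage when the pipeline is cross-validated.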

Society and Ethics

Understanding the Limitations of AI: When Algorithms Fail

Machine learning based algorithms are becoming more and more ubiquitous—determining high stakes outcomes like who is deemed a criminal and who should be hired for a particular job. Many of these systems use tools ranging from language translation to face recognition systems that do not work perfectly and can fail in unpredictable ways. I will outline examples (some of which are drawn from my research) where these systems fail, leading to harmful outcomes for targeted individuals. Currently, there are no standards or laws in place to determine what types of AI based tools can be used in which scenarios. I will discuss research proposing standards to increase transparency and accountability, in order to mitigate harmful outcomes resulting from the use of black box models without knowledge of the data they were trained on.

Artificial Intelligence: Panacea or Peril?

Adam Cheyer has been working professionally in artificial intelligence (AI) for more than 30 years, and in the last few years, milestones have been achieved that he never thought he would see in his lifetime. These stunning advances in the field have led some to question whether things are going too fast, and whether our role as the most intelligent beings on this planet is in jeopardy. Is AI the panacea to all our problems, or the peril we need most fear?

Integrating Ethics into the Agile Delivery of AI Systems

8 out of 10 AI-first products fail. One of the reasons behind this gloomy stat is that people do not trust the systems built. AI ethics isn't just a philosophical question and moral responsibility; it's a key ingredient to making your product trustworthy and more likely to be adopted by end users or clients. This workshop will take you through how to plan your sprints to include the bare minimum of ethical thinking and planning in your roadmap before deploying AI to production. 

Super Intelligence Safety and Security

Many scientists, futurologists and philosophers have predicted that humanity will achieve a technological breakthrough and create Artificial General Intelligence (AGI) within the next one hundred years. It has been suggested that AGI may be a positive or negative factor in all domains of human endeavor, including science, business and politics. After summarizing the arguments for why AGI may pose a risk, I will survey the field's proposed responses, with particular focus on solutions advocated in my own work. The presentation will help researchers gain a good understanding of the emerging opportunities and problems.

Building CV Models for Evaluation of the Development of Visual Attention During Infancy

Infants mainly use visual cues, for example holding eye gaze on objects, to interact with the world before they acquire verbal skills. Reading and understanding those cues correctly is integral to childhood development and relationships. It is still unknown to developmental researchers how infants begin to recognize objects and gestures visually, interact with them, and associate names with objects. Scientists have been studying the visual development of infants by observing child-parent interactions during tabletop toy-play experiments. They mount head cameras on infants and observe their interaction with parents from the infant's point of view. They then process and analyze the large volume of recorded video data manually, frame by frame, to study the development of visual attention in infants. Recent advances in head cameras provide new opportunities for computer vision researchers to build computational models that help developmental scientists process videos and evaluate infants' visual attention in a new way, faster and more accurately. Such models can reveal new patterns in the developmental process of visual attention in infants that cannot be discovered by the human eye, as head cameras are in constant motion due to the large and random head movements generated by infants. In this presentation, we explain how we built computer vision models to study the development of infants' visual attention on objects and gestures, and evaluate the results.

Ethical Algorithms

As machine learning is applied to increasingly important tasks, we are being confronted with ethical issues. Machine learning algorithms can inadvertently violate the privacy of people whose data is used for training, and can exacerbate discriminatory decision making. One retrograde solution would be to scale back our use of these technologies, or to attempt to police or regulate them by hand. But this would neither scale, nor allow us to reap the benefits of modern data analysis. In this talk, we walk through technologies that have been recently developed to embed both privacy and fairness protections directly into the design of algorithms. 
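One concrete privacy technology of the kind discussed above is the Laplace mechanism from differential privacy: to release a count while protecting any single individual, add Laplace noise scaled to the query's sensitivity (1 for a counting query) divided by the privacy budget epsilon. This is a textbook sketch, not code from the talk.

```python
import random

def private_count(records, predicate, epsilon, rng):
    true_count = sum(1 for r in records if predicate(r))
    # The difference of two Exponential(epsilon) draws is Laplace
    # noise with scale 1/epsilon -- the right scale for a query with
    # sensitivity 1 (one person changes the count by at most 1).
    noise = rng.expovariate(epsilon) - rng.expovariate(epsilon)
    return true_count + noise

# Release a differentially private count of even numbers in 0..999.
released = private_count(range(1000), lambda r: r % 2 == 0,
                         epsilon=0.5, rng=random.Random(7))
```

Smaller epsilon means more noise and stronger privacy; the released value is close to, but deliberately never exactly, the true count.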

AI in the Computational Arts

This talk is about AI within the computational arts. It will review the broader history of computer science and AI methodology being applied to new media arts, including prominent examples from artificial life, computer music, and generative art. It will then shift to more contemporary examples from deep learning, particularly deep generative models and optimization-based methods like DeepDream, style transfer, and texture synthesis, as well as methods for real-time machine learning in the context of musical performance and interactive installation. The talk will end by discussing the role of creativity in computer science education, and presenting relevant tools made by the speaker.

Interpretability in Machine Learning

Predictive models have begun to aid human decisions in a variety of domains. The recent rise of deep learning is increasingly pushing the boundaries of accuracy such models can achieve. At the same time, these deep learning systems have also brought the notion of models-as-black-boxes to the forefront. A major hurdle in their increased adoption, especially in strictly regulated fields, is the challenge of providing human-interpretable predictions.


In this talk we will discuss: the need for and scope of interpretability in machine learning; the relation of interpretability to fairness and transparency of algorithms; the lack of consensus and possible common ground among the many interpretations of interpretability; and future directions and desiderata for interpretability in machine learning.

Can AI Democratize Investing for the Benefit of Society? And Should It? 

Much has been said about the potential for AI to benefit society. We'll look at a specific application of this in the finance sector. Specifically, we'll examine how AI has the potential to change the landscape for financial investing long term, particularly in the area of sustainability (investing for a better world). This raises a number of questions: How might this shift the balance of who will benefit from AI-determined financial predictions? Will AI's influence in fintech wind up opening the door for smaller parties, or will big money ultimately be the beneficiary? Is more access to information actually a good thing? Is this generally true for most industries? We'll look at approaches, models, and code that are pushing this discussion from theory to reality.

New Research

The Natural Language Decathlon: Multitask Learning as Question Answering

Deep learning has improved performance on many natural language processing (NLP) tasks individually. However, general NLP models cannot emerge within a paradigm that focuses on the particularities of a single metric, dataset, and task. We introduce the Natural Language Decathlon (decaNLP), a challenge that spans ten tasks: question answering, machine translation, summarization, natural language inference, sentiment analysis, semantic role labeling, zero-shot relation extraction, goal-oriented dialogue, semantic parsing, and commonsense pronoun resolution. We cast all tasks as question answering over a context. Furthermore, we present a new Multitask Question Answering Network (MQAN) that jointly learns all tasks in decaNLP without any task-specific modules or parameters in the multitask setting. MQAN shows improvements in transfer learning for machine translation and named entity recognition, domain adaptation for sentiment analysis and natural language inference, and zero-shot capabilities for text classification. We demonstrate that MQAN's multi-pointer-generator decoder is key to this success, and performance further improves with an anti-curriculum training strategy. Though designed for decaNLP, MQAN also achieves state-of-the-art results on the WikiSQL semantic parsing task in the single-task setting. We release code for procuring and processing data, training and evaluating models, and reproducing all experiments for decaNLP.
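The "everything is question answering" framing becomes concrete once heterogeneous task examples are converted into (question, context, answer) triples, the single input/output format a model like MQAN trains on. The question templates and examples below are illustrative, not drawn from the decaNLP datasets.

```python
def as_qa(task, context, answer):
    # Each task contributes a natural-language question; the model never
    # needs a task-specific head, only this shared triple format.
    questions = {
        "sentiment": "Is this review positive or negative?",
        "translation": "What is the translation from English to German?",
        "summarization": "What is the summary?",
    }
    return {"question": questions[task], "context": context, "answer": answer}

examples = [
    as_qa("sentiment", "A thoughtful, moving film.", "positive"),
    as_qa("translation", "Good morning.", "Guten Morgen."),
]
```

Because every task looks identical at the input/output level, a single network can be trained on the mixture, which is what enables the transfer and zero-shot behavior the abstract describes.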

Learning Complex Policies in Deep Reinforcement Learning

Deep Reinforcement Learning methods have achieved significant successes recently by marrying the representation learning power of deep networks with the control learning abilities of RL. This has resulted in some of the most significant recent breakthroughs in AI, such as the Atari game player and the AlphaGo engine from DeepMind. This success has opened up new lines of research and revived old ones in the RL community. One direction that has not received much attention is that of learning structured policies. In this talk, I will describe three recent studies that we conducted on learning complex policies. The first is FiGAR, which learns to adapt the granularity of decision making. The second is A2T, which learns to transfer policies from multiple source tasks to a target task. The third is Risk Averse Imitation Learning (RAIL), which tries to minimize tail risk when learning to imitate expert policies.

Towards Ambient Intelligence in AI-Assisted Healthcare Spaces

Artificial intelligence has begun to impact healthcare in areas including electronic health records, medical images, and genomics. But one aspect of healthcare that has been largely left behind thus far is the physical environments in which healthcare delivery takes place: hospitals, clinics, and assisted living facilities, among others. In this talk I will discuss our work on endowing healthcare spaces with ambient intelligence, using computer vision-based human activity understanding in the healthcare environment to assist clinicians with complex care. I will first present pilot implementations of AI-assisted healthcare spaces where we have equipped the environment with visual sensors. I will then discuss our work on human activity understanding, a core problem in computer vision. I will present deep learning methods for dense and detailed recognition of activities, and efficient action detection, important requirements for ambient intelligence, and I will discuss these in the context of several clinical applications. Finally, I will present work and future directions for integrating this new source of healthcare data into the broader clinical data ecosystem.

Finding Products and Brands in Massive Amounts of Data

In today’s world, there are billions of connected devices. We are reaching the point of "spaghetti IoT", where these devices are connected but lack smartness or cognitive capabilities.


In this talk we will focus on applying machine learning, deep learning, and data science to make these connected devices smarter and more efficient. How can these devices not just talk to each other, but start to understand and become capable of processing that information and providing valuable insights? This talk will focus on how machine learning and deep learning models can help IoT devices make smarter decisions using the Watson IoT Platform, and will also touch on other Watson services on IBM Cloud. 

Finding Products and Brands in Massive Amounts of Data

Information extraction has undergone big data transitions, with unique challenges (i.e., massiveness and noisiness) and advantages (i.e., corpus statistics and data redundancy). Identifying names of products in text is important in the business setting for understanding marketing needs and revenue growth. We identify the names of products in massive free-form conversation texts and classify/type them into a product catalogue. We implement three approaches to the problem: supervised (with crowd-annotated training data), semi-supervised (using patterns), and unsupervised (with corpus statistics). First, our supervised experiments show that a deep learning model outperforms sequential graphical models and can achieve top results in two specific domains. We also present preliminary findings from our semi-supervised and unsupervised methods. While supervised methods can achieve top results, we need a method that is repeatable with little human effort and insight in a new domain. We present a preliminary unsupervised method that leverages the vastness and redundancy of big data.

Generating Natural Language Explanations of Visual Decisions

Modern neural networks excel at numerous tasks, such as fine-grained category recognition and visual question answering. However, they do not have the ability to explain or justify their decisions. In this work, I will present models that not only perform a standard vision task like classification, but also justify their decisions using natural language.

Decentralized AI for the Rest of Us


Brought to you by

Our mission is to connect professionals, developers and students with tech industry experts from around the world through an interactive online conference. With our inventive streaming platform, conference-goers can watch live talks, engage in live Q&A, collaborate, and book one-to-one sessions with their favorite speakers. 

Our Vision: Continue to be the world’s largest online developer conference series, assemble the world’s most innovative and disruptive speakers, and inspire professionals, developers and students to learn and improve tech skills.