Development of a Secure Private Neural Network Capability

Abstract

Machine Learning (ML) tools such as Deep Neural Networks (DNNs) have gained widespread popularity due to their ability to quickly and accurately perform discriminative tasks, such as object detection and classification. However, current DNN implementations have two significant drawbacks. First, traditional DNNs require access to unprotected (unencrypted) data; even if the data is secured and the ML tool is adapted to operate on encrypted data, operational performance slows to the point that the approach becomes intractable. Second, recent research has shown that many DNNs are susceptible to white-box attacks (full access to the ML tool and its operations) and black-box attacks (access only to system inputs and outputs), allowing adversaries to maliciously manipulate the ML tool's output.
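The article does not develop either point with code; the sketches below are illustrative only. First, a minimal example of inference-style arithmetic on homomorphically encrypted data, using the open-source TenSEAL library's CKKS scheme (the library choice, encryption parameters, and timing harness are assumptions for illustration, not the article's method):

```python
import time
import tenseal as ts

# CKKS context: parameter choices here are illustrative defaults
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()  # rotations needed for encrypted dot products

weights = [0.5, -1.2, 2.0, 0.7]  # hypothetical layer weights
x = [0.1, 0.2, 0.3, 0.4]         # hypothetical input

# Plaintext dot product: what a traditional DNN layer computes
t0 = time.perf_counter()
plain = sum(w * v for w, v in zip(weights, x))
t_plain = time.perf_counter() - t0

# The same dot product on encrypted data
enc_x = ts.ckks_vector(context, x)
t0 = time.perf_counter()
enc_out = enc_x.dot(weights)
t_enc = time.perf_counter() - t0

print(f"plaintext: {plain:.4f} in {t_plain:.6f} s")
print(f"encrypted: {enc_out.decrypt()[0]:.4f} in {t_enc:.6f} s")
```

Even this single encrypted dot product typically runs orders of magnitude slower than its plaintext counterpart, which is the intractability the abstract describes when scaled to a full network. Second, the white-box threat model can be illustrated with the well-known Fast Gradient Sign Method (FGSM), a standard attack from the literature rather than anything specific to this article (PyTorch; the model, inputs, and step size epsilon are assumed):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """White-box FGSM: nudge the input along the sign of the loss gradient.

    Computing x.grad requires full access to the model's parameters,
    which is exactly what the white-box threat model assumes.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)  # loss w.r.t. the true label
    loss.backward()                          # gradients via full model access
    x_adv = x + epsilon * x.grad.sign()      # step that increases the loss
    return x_adv.clamp(0.0, 1.0).detach()    # keep pixels in a valid range
```

A black-box adversary, by contrast, cannot compute gradients directly and must approximate such perturbations from input-output queries alone.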

In their short history, DNNs have been successfully applied to a broad spectrum of problems: speech and image recognition, medical imagery diagnostics, drug discovery, customer relationship management, fraud detection, and military applications, among many others. In many of these applications, a critical factor has been the need to access large volumes of data, which creates privacy concerns and opens the potential for insights with inappropriate or unwanted implications. These concerns are most acute in domains involving patient data and in military settings. Although such problem domains could greatly benefit from the capabilities of an ML tool, these critical security concerns thwart its use.

Details
Pages: 3
Citation: "Development of a Secure Private Neural Network Capability," Mobility Engineering, September 1, 2020.

Additional Details
Published: Sep 1, 2020
Product Code: 20AERP09_04
Content Type: Magazine Article
Language: English