Privacy-Preserving Multi-Task Learning

Abstract

Multi-task learning (MTL), which improves learning performance by transferring information between related tasks, has drawn increasing attention in the data mining field. To handle tasks whose data are stored at different locations (or nodes), distributed MTL was proposed. It not only enhances learning performance but also improves computing efficiency, since it transforms the original centralized computing framework into a distributed one under which computations can be done in parallel. The major drawback of distributed MTL is a potential violation of confidentiality when the data stored at each node contain sensitive information (e.g., medical records). Some distributed MTL algorithms were designed to protect the original data by transferring only aggregate information (e.g., supports or gradients) from each node to a server that combines the received information to produce the desired models. However, since aggregate data may still leak sensitive information, the security guarantees of the existing solutions cannot be formally proved or verified. Thus, the goal of this paper is to develop a provably privacy-preserving multi-task learning (PP-MTL) protocol that incorporates state-of-the-art cryptographic techniques to achieve the best security guarantee. We also conducted experiments to demonstrate the efficiency of our proposed method.
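
For intuition only, the sketch below illustrates one common cryptographic building block for hiding per-node gradients during aggregation: pairwise additive masking, in which zero-sum random masks cancel when the server sums the uploads. This is an illustrative assumption for exposition, not the PP-MTL protocol described in the paper; the function names and parameters are hypothetical.

```python
# Illustrative sketch (not the paper's PP-MTL protocol): nodes hide their local
# gradients with pairwise additive masks so the server learns only the sum.
import numpy as np

def pairwise_masks(n_nodes, dim, seed=0):
    """Generate zero-sum masks: node i adds r_ij for j > i and subtracts r_ji for j < i."""
    rng = np.random.default_rng(seed)
    masks = np.zeros((n_nodes, dim))
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            r = rng.normal(size=dim)   # secret shared only between nodes i and j
            masks[i] += r              # node i adds the mask
            masks[j] -= r              # node j subtracts it, so all masks sum to zero
    return masks

def masked_uploads(local_gradients, masks):
    """Each node sends gradient + mask; any single upload looks random to the server."""
    return [g + m for g, m in zip(local_gradients, masks)]

# Example: 3 nodes, 4-dimensional local gradients.
grads = [np.arange(4.0) * (k + 1) for k in range(3)]
uploads = masked_uploads(grads, pairwise_masks(n_nodes=3, dim=4))
# The server recovers only the aggregate, which equals the plain sum of gradients.
assert np.allclose(sum(uploads), sum(grads))
```

The design point this sketch highlights is that the server never observes an individual node's gradient in the clear, only the masked uploads and their sum, which is the kind of formal guarantee the abstract argues plain aggregate-sharing schemes lack.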

Meeting Name

2018 IEEE International Conference on Data Mining, ICDM 2018 (2018: Nov. 17-20, Singapore, Singapore)

Department(s)

Computer Science

Research Center/Lab(s)

Intelligent Systems Center

Comments

This research was partially supported by the National Science Foundation (NSF) under grant number 1755946 and by the University of Missouri Research Board (UMRB) under proposal number 4991.

Keywords and Phrases

Aggregates; Data privacy; Distributed computer systems; Efficiency; Learning systems; Linearization; Computing efficiency; Computing frameworks; Cryptographic techniques; Distributed computing frameworks; Learning performance; Multitask learning; Privacy preserving; Sensitive information; Data mining; Multi-task learning

International Standard Book Number (ISBN)

978-1-5386-9159-5

International Standard Serial Number (ISSN)

1550-4786; 2374-8486

Document Type

Article - Conference proceedings

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2018 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.

Publication Date

01 Nov 2018
