Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks


Xin Dong (Harvard University)*, Hongxu Yin (NVIDIA), Jose M. Alvarez (NVIDIA), Jan Kautz (NVIDIA), Pavlo Molchanov (NVIDIA), H.T. Kung (Harvard University)
The 33rd British Machine Vision Conference

Abstract

Mobile edge devices face increasing demand for deep neural network (DNN) inference while suffering from stringent constraints on computing resources. Split computing (SC) has emerged as a popular approach to this problem: only the initial layers are executed on the device, and the remaining layers are offloaded to the cloud. Prior works usually assume that SC offers privacy benefits, as only intermediate features, rather than private data, are shared from devices to the cloud. In this work, we debunk this SC-induced privacy protection by presenting a novel data-free model inversion method and demonstrating sample inversion, in which private device data can still be leaked with high fidelity from the shared features even after tens of neural network layers. We propose Divide-and-Conquer Inversion (DCI), which partitions the given deep network into multiple shallow blocks and inverts each block with a dedicated inversion module. Additionally, a cycle-consistency technique is introduced that re-directs the inverted results back through the model under attack in order to better supervise the training of the inversion modules. In contrast to prior art that relies on generative priors and computation-intensive optimization to derive inverted samples, DCI removes the need for real device data and generative priors, and completes inversion with a single quick forward pass over the inversion modules. For the first time, we scale data-free and sample-specific inversion to deep architectures and large datasets, for both discriminative and generative networks. We perform model inversion attacks on ResNet and RepVGG models on ImageNet and on SNGAN on CelebA, and recover the original input from intermediate features more than 40 layers deep into the network. Our method reveals a surprising privacy vulnerability of modern DNNs to model inversion attacks, and provides a tool for empirically measuring the amount of potential data leakage and assessing the privacy vulnerability of DNNs under split computing.
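
To make the approach concrete, below is a minimal PyTorch sketch of the divide-and-conquer idea under stated assumptions: the names (train_dci, blocks, inverters, invert_feature) are hypothetical, the inversion modules are small decoders supplied by the attacker, and mean-squared error is used for both the block-wise reconstruction and cycle-consistency terms. The paper's exact inverter architectures and loss weighting are not reproduced here.

import torch
import torch.nn as nn

def train_dci(blocks, inverters, steps=1000, lr=1e-3,
              batch=32, in_shape=(3, 224, 224)):
    """Train one small inversion module per shallow block, data-free."""
    for b in blocks:                       # the model under attack is frozen
        b.requires_grad_(False)
    params = [p for inv in inverters for p in inv.parameters()]
    opt = torch.optim.Adam(params, lr=lr)
    mse = nn.MSELoss()
    for _ in range(steps):
        x = torch.randn(batch, *in_shape)  # random inputs: no real device data
        inp, loss = x, 0.0
        for block, inv in zip(blocks, inverters):
            out = block(inp).detach()      # feature produced by this block
            rec = inv(out)                 # shallow, block-wise inversion
            loss = loss + mse(rec, inp)    # reconstruct the block's input
            # Cycle consistency: route the inverted result back through the
            # block under attack and match the resulting feature.
            loss = loss + mse(block(rec), out)
            inp = out
        opt.zero_grad()
        loss.backward()
        opt.step()

def invert_feature(inverters, feature):
    """Attack time: recover the input with a single forward pass."""
    for inv in reversed(inverters):
        feature = inv(feature)
    return feature

At attack time, the adversary needs only the intermediate feature shared by the device: chaining the trained inverters from the deepest block back to the first recovers the input in one pass, matching the single-forward-pass property described in the abstract.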

Citation

@inproceedings{Dong_2022_BMVC,
author    = {Xin Dong and Hongxu Yin and Jose M. Alvarez and Jan Kautz and Pavlo Molchanov and H.T. Kung},
title     = {Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks},
booktitle = {33rd British Machine Vision Conference 2022, {BMVC} 2022, London, UK, November 21-24, 2022},
publisher = {{BMVA} Press},
year      = {2022},
url       = {https://bmvc2022.mpi-inf.mpg.de/0230.pdf}
}


Copyright © 2022 The British Machine Vision Association and Society for Pattern Recognition
