Jiaqing Chen

Ph.D. student in Computer Science
Arizona State University

Principal Investigator
Dr. Ross Maciejewski

Research Laboratory
VADER Lab

Research Interests
Big Data, Deep Learning, Interactive Machine Learning, and Explainable AI.


Contact:
School of Computing and Augmented Intelligence
Arizona State University
342DB, 699 S. Mill Avenue
Tempe, AZ 85281


Publications

[VLDB 2022] Lixi Zhou, Jiaqing Chen, Amitabh Das, Hong Min, Lei Yu, Ming Zhao, and Jia Zou. "Serving Deep Learning Models with Deduplication from Relational Databases." VLDB 2022, PVLDB Volume 15 Issue 10. [PDF]

Abstract: There are significant benefits to serving deep learning models from relational databases. First, features extracted from databases do not need to be transferred to a decoupled deep learning system for inference, which significantly reduces system management overhead. Second, in a relational database, data management along the storage hierarchy is fully integrated with query processing, so model serving can continue even when the working set size exceeds available memory. Applying model deduplication can greatly reduce storage space, memory footprint, cache misses, and inference latency. However, existing data deduplication techniques are not applicable to deep learning model serving in relational databases: they consider neither the impact on model inference accuracy nor the mismatch between tensor blocks and database pages. This work proposes synergistic storage optimization techniques for duplication detection, page packing, and caching that enhance database systems for model serving. We implemented the proposed approach in netsDB, an object-oriented relational database. Evaluation results show that our techniques significantly improve storage efficiency and model inference latency, and that serving models from relational databases outperforms existing deep learning frameworks when the working set size exceeds available memory.
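The block-level deduplication idea in the abstract can be illustrated with a minimal sketch: split each model's weights into fixed-size blocks, hash each block, and store every distinct block only once while models keep just block references. This is a toy illustration, not netsDB's actual implementation; the names (`BlockStore`, `split_blocks`) and the tiny block size are assumptions for demonstration.

```python
import hashlib

# Toy block size; a real system would use page-sized tensor blocks.
BLOCK = 4

def split_blocks(weights):
    """Split a flat weight list into fixed-size blocks (last one zero-padded)."""
    blocks = []
    for i in range(0, len(weights), BLOCK):
        chunk = weights[i:i + BLOCK]
        chunk = chunk + [0.0] * (BLOCK - len(chunk))
        blocks.append(tuple(chunk))
    return blocks

class BlockStore:
    """Content-addressed store: each distinct block is kept once;
    models are represented as lists of block indices."""
    def __init__(self):
        self.blocks = []   # unique blocks, in insertion order
        self.index = {}    # block digest -> position in self.blocks

    def add_model(self, weights):
        refs = []
        for blk in split_blocks(weights):
            digest = hashlib.sha256(repr(blk).encode()).hexdigest()
            if digest not in self.index:
                self.index[digest] = len(self.blocks)
                self.blocks.append(blk)
            refs.append(self.index[digest])
        return refs  # compact representation of the model

store = BlockStore()
# Two fine-tuned variants that share their first block of weights.
m1 = store.add_model([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
m2 = store.add_model([0.1, 0.2, 0.3, 0.4, 0.9, 0.8, 0.7, 0.6])
# Only 3 unique blocks are stored for the 4 block references held by m1 and m2.
```

Exact-match hashing like this ignores the accuracy question the paper raises: the actual work must also decide when *approximately* equal blocks can be merged without hurting inference accuracy, and must align tensor blocks with database pages.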