Jiaqing Chen

Ph.D. student in Computer Science
Arizona State University

Principal Investigator
Dr. Ross Maciejewski

Research Laboratory
VADER Lab

Research Interests
Big Data, Deep Learning, Interactive Machine Learning, and Explainable AI.


Contact:
School of Computing and Augmented Intelligence
Arizona State University
342DB, 699 S. Mill Avenue
Tempe, AZ 85281


Projects

[ONNX-MLIR] This project provides compiler technology to transform a valid Open Neural Network Exchange (ONNX) graph into code that implements the graph with minimum runtime support. It implements the ONNX standard and is based on the underlying LLVM/MLIR compiler technology.
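As a rough illustration of the kind of input ONNX-MLIR consumes, the sketch below builds and validates a minimal ONNX graph with the `onnx` Python package; the shapes and the trailing compiler invocation are illustrative, not taken from the project's documentation.

```python
# A minimal sketch: construct a tiny ONNX graph that a compiler such as
# onnx-mlir could lower to native code. Requires the `onnx` package.
import onnx
from onnx import TensorProto, helper

# Single Add node: C = A + B
node = helper.make_node("Add", inputs=["A", "B"], outputs=["C"])
graph = helper.make_graph(
    [node],
    "add_graph",
    inputs=[
        helper.make_tensor_value_info("A", TensorProto.FLOAT, [2, 2]),
        helper.make_tensor_value_info("B", TensorProto.FLOAT, [2, 2]),
    ],
    outputs=[helper.make_tensor_value_info("C", TensorProto.FLOAT, [2, 2])],
)
model = helper.make_model(graph)
onnx.checker.check_model(model)  # validate the graph before saving
onnx.save(model, "add.onnx")
# The saved graph could then be compiled ahead of time, e.g. with the
# onnx-mlir command-line driver (invocation shown for illustration):
#   onnx-mlir --EmitLib add.onnx
```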

[A Comparison of Decision Forest Inference Platforms from A Database Perspective] Decision forests, including RandomForest, XGBoost, and LightGBM, are among the most popular machine learning techniques, used in many industrial scenarios such as credit card fraud detection, ranking, and business intelligence. Because the inference process is usually performance-critical, a number of frameworks dedicated to decision forest inference have been developed, such as ONNX, TreeLite from Amazon, TensorFlow Decision Forests from Google, HummingBird from Microsoft, Nvidia FIL, and lleaves. However, these frameworks are all decoupled from data management frameworks, and it is unclear whether in-database inference will improve overall performance. In addition, these frameworks use different algorithms, optimization techniques, and parallelism models, and it is unclear how these implementation choices affect overall performance and how to make design decisions for an in-database inference framework. In this work, we investigated these questions by comprehensively comparing the end-to-end performance of the aforementioned inference frameworks and netsDB, an in-database inference framework we implemented. Through this study, we identified that netsDB is best suited for handling small-scale models on large-scale datasets and models of all scales on small-scale datasets, for which it achieved speedups of up to hundreds of times. In addition, the relation-centric representation we proposed significantly improved netsDB's performance in handling large-scale models, while the model-reuse optimization we proposed further improved netsDB's performance in handling small-scale datasets.
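A simplified sketch of the kind of end-to-end inference measurement such a comparison involves, using scikit-learn as a stand-in for the compared frameworks; the model and dataset sizes are illustrative, not those used in the study.

```python
# Sketch: time batch inference of a "small-scale model" (few, shallow
# trees) on a "large-scale dataset". A real comparison would repeat this
# across frameworks (ONNX, TreeLite, HummingBird, ...) and scales.
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.random((10_000, 28), dtype=np.float32)
y_train = rng.integers(0, 2, 10_000)

model = RandomForestClassifier(n_estimators=10, max_depth=8, n_jobs=-1)
model.fit(X_train, y_train)

X_test = rng.random((1_000_000, 28), dtype=np.float32)
start = time.perf_counter()
model.predict(X_test)
print(f"batch inference latency: {time.perf_counter() - start:.2f}s")
```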

[Serving Deep Learning Models with Deduplication from Relational Databases] There are significant benefits to serving deep learning models from relational databases. First, features extracted from the database do not need to be transferred to a decoupled deep learning system for inference, so system management overhead can be significantly reduced. Second, in a relational database, data management along the storage hierarchy is fully integrated with query processing, so model serving can continue even if the working set size exceeds the available memory. Applying model deduplication can greatly reduce the storage space, memory footprint, cache misses, and inference latency. However, existing data deduplication techniques are not applicable to deep learning model serving in relational databases: they consider neither the impact on model inference accuracy nor the inconsistency between tensor blocks and database pages. This work proposed synergistic storage optimization techniques for duplication detection, page packing, and caching to enhance database systems for model serving. We implemented the proposed approach in netsDB, an object-oriented relational database. Evaluation results show that the proposed techniques significantly improved storage efficiency and reduced model inference latency, and that serving models from relational databases outperformed existing deep learning frameworks when the working set size exceeds available memory.
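To make the core idea concrete, the sketch below shows block-level duplicate detection on model tensors: chunk each weight matrix into fixed-size blocks, hash the blocks, and store each distinct block once. This is only a conceptual illustration; netsDB's actual detector additionally accounts for inference-accuracy impact and page packing, and the block size and names here are made up.

```python
# Sketch of block-level duplicate detection across model tensors.
import hashlib
import numpy as np

BLOCK = 1000  # rows per tensor block (illustrative)

def dedup_blocks(tensors):
    """Map each tensor to block-hash references, storing unique blocks once."""
    store, refs = {}, {}
    for name, t in tensors.items():
        block_ids = []
        for i in range(0, t.shape[0], BLOCK):
            block = np.ascontiguousarray(t[i:i + BLOCK])
            h = hashlib.sha1(block.tobytes()).hexdigest()
            store.setdefault(h, block)  # keep one physical copy per hash
            block_ids.append(h)
        refs[name] = block_ids
    return store, refs

# Two models sharing an identical layer (common for fine-tuned variants
# of the same base model): 6 block references, but only 3 stored blocks.
shared = np.ones((3000, 64), dtype=np.float32)
store, refs = dedup_blocks({"model_a.emb": shared, "model_b.emb": shared.copy()})
print(len(store), "unique blocks for", sum(len(v) for v in refs.values()), "references")
```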

[Interactive Visualization Server for Geospatial Data Exploration] This project is part of UCR-STAR, the UCR Spatio-Temporal Active Repository. It speeds up the response time of submitted requests through an intermediate cache structure that keeps small images in memory, saving the computation cost of regenerating them. It lets users submit requests to visualize new datasets, which are added to the system automatically via back-end operations. Dataset information is stored in MongoDB, a NoSQL database, so the system can support a large number of datasets and serve dataset metadata to front-end requests.
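A minimal sketch of the in-memory tile cache idea: rendered images are keyed by (dataset, zoom, x, y) and evicted least-recently-used, so hot tiles skip the expensive rendering step. The render callback, key layout, and cache size are hypothetical, and the MongoDB-backed metadata store is omitted.

```python
# Sketch: LRU cache of rendered map tiles keyed by (dataset, z, x, y).
from collections import OrderedDict

class TileCache:
    def __init__(self, capacity=1024):
        self.capacity = capacity
        self._tiles = OrderedDict()  # (dataset, z, x, y) -> image bytes

    def get(self, key, render):
        """Return a cached tile, rendering (and caching) it on a miss."""
        if key in self._tiles:
            self._tiles.move_to_end(key)     # mark as recently used
            return self._tiles[key]
        image = render(key)                  # expensive: rasterize the tile
        self._tiles[key] = image
        if len(self._tiles) > self.capacity:
            self._tiles.popitem(last=False)  # evict least-recently-used
        return image

cache = TileCache(capacity=2)
fake_render = lambda key: repr(key).encode()  # stand-in for real rendering
cache.get(("osm_lakes", 3, 2, 5), fake_render)  # miss: rendered and cached
cache.get(("osm_lakes", 3, 2, 5), fake_render)  # hit: served from memory
```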