Jiliang Tang, Assistant Professor, is awarded funding from the Army Research Office

Abstract:

As generalizations of traditional deep models such as CNNs and RNNs to graph-structured data, graph neural networks (GNNs) inherit both the advantages and the disadvantages of those models. Traditional deep models are often treated as black boxes that lack human-intelligible explanations, and they are easily fooled by adversarial attacks. Recent research has demonstrated that GNNs share the same drawbacks, i.e., lack of interpretability and vulnerability to adversarial examples. These drawbacks have raised serious concerns about adopting GNNs in many real-world applications. For example, in threat detection and prevention, if terrorists disguise their personal identity information or exploit the vulnerabilities of GNNs to evade the detection system, the cost to the Army could be enormous. Without understanding and verifying their inner working mechanisms and robustness, GNNs cannot be fully trusted, which will prevent their use in critical applications pertaining to safety, fairness, and privacy such as threat detection and prevention, autonomous driving, and healthcare. Thus, pushing the research boundaries of GNNs in terms of interpretability and vulnerability, and building stable GNN models (or stability), have the potential to impact the successful adoption of GNNs in a broader range of fields.
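To ground the terminology, the following is a minimal sketch of a single graph-convolution layer, the basic building block of the GNNs the abstract refers to. It uses the common symmetric normalization from the GCN literature; the function name, toy graph, and dimensions are illustrative assumptions, not artifacts of the funded project:

```python
import numpy as np

def gcn_layer(adjacency, features, weights):
    """One graph-convolution layer: ReLU(D^-1/2 (A + I) D^-1/2 @ H @ W).

    Each node's new representation mixes its own features with those of
    its neighbors, weighted by a degree-normalized adjacency matrix.
    """
    a_hat = adjacency + np.eye(adjacency.shape[0])        # add self-loops
    deg_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))       # D^{-1/2}
    a_norm = deg_inv_sqrt[:, None] * a_hat * deg_inv_sqrt[None, :]
    return np.maximum(a_norm @ features @ weights, 0.0)   # ReLU activation

# Toy example: a 4-node path graph, 3 input features, 2 output features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.standard_normal((4, 3))   # node feature matrix
W = rng.standard_normal((3, 2))   # learnable layer weights
out = gcn_layer(A, H, W)
print(out.shape)  # (4, 2): one 2-dimensional embedding per node
```

Because each output depends on the graph structure through `A`, perturbing even a single edge changes the embeddings of nearby nodes, which is exactly the attack surface the abstract's adversarial-vulnerability concern points to.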

(Date Posted: 2021-04-06)