Created at 11am, Mar 4
Ms-RAG, Artificial Intelligence
SOFTENED SYMBOL GROUNDING FOR NEURO-SYMBOLIC SYSTEMS
Contract ID: 4Es8cqUN-lCue2GPmCeS25Q9UObKubFsv30kg2KX4Qw
File Type: PDF
Entry Count: 84
Embed. Model: jina_embeddings_v2_base_en
Index Type: hnsw

Zenan Li 1, Yuan Yao 1, Taolue Chen 2, Jingwei Xu 1, Chun Cao 1, Xiaoxing Ma 1, Jian Lü 1
1 - State Key Lab of Novel Software Technology and Department of Computer Science and Technology, Nanjing University, China
2 - Department of Computer Science, Birkbeck, University of London, UK
lizn@smail.nju.edu.cn, t.chen@bbk.ac.uk, {y.yao,jingweix,caochun,xxm,lj}@nju.edu.cn

ABSTRACT
Neuro-symbolic learning generally consists of two separate worlds, i.e., neural network training and symbolic constraint solving, whose success hinges on symbol grounding, a fundamental problem in AI. This paper presents a novel, softened symbol grounding process, bridging the gap between the two worlds and resulting in an effective and efficient neuro-symbolic learning framework. Technically, the framework features (1) modeling of symbol solution states as a Boltzmann distribution, which avoids expensive state searching and facilitates mutually beneficial interactions between network training and symbolic reasoning; (2) a new MCMC technique leveraging projection and SMT solvers, which efficiently samples from disconnected symbol solution spaces; (3) an annealing mechanism that can escape from sub-optimal symbol groundings. Experiments with three representative neuro-symbolic learning tasks demonstrate that, owing to its superior symbol grounding capability, our framework successfully solves problems well beyond the frontier of the existing proposals.
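As a rough illustration of the Boltzmann-distribution view mentioned in the abstract, the following toy sketch (not the authors' implementation; the energy function, constraint, and temperature here are placeholder assumptions) enumerates a tiny discrete symbol space and puts Boltzmann weights only on the constraint-satisfying states:

# Illustrative sketch (not the authors' code): symbol solution states z are
# weighted by a Boltzmann distribution p(z) ~ exp(-E(z)/T), restricted to
# assignments that satisfy the symbolic constraint. Energy, constraint, and
# temperature are hypothetical placeholders.
import math
import itertools

def boltzmann_over_solutions(energy, satisfies, domain, dim, T=1.0):
    """Enumerate a tiny discrete symbol space and return normalized weights."""
    states = [z for z in itertools.product(domain, repeat=dim) if satisfies(z)]
    weights = [math.exp(-energy(z) / T) for z in states]
    Z = sum(weights)
    return {z: w / Z for z, w in zip(states, weights)}

# Toy example: two binary symbols constrained to XOR; energy prefers z = (1, 0).
dist = boltzmann_over_solutions(
    energy=lambda z: abs(z[0] - 1) + abs(z[1] - 0),
    satisfies=lambda z: (z[0] ^ z[1]) == 1,
    domain=(0, 1), dim=2,
)
print(dist)  # probability mass only on XOR-satisfying states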

For this regression task, we define the symbol z as a multivariate Gaussian with diagonal covariance, i.e., z ~ N(f(x), σ²I), where f(x) denotes the predicted distances from all nodes to the given destination. The dimension of z is 30, and the projection is defined by dropping [z_5, z_10, z_15, z_20, z_25]. The random walk in each step selects a component of z and adds uniform noise from [-5, 5] to it. Figure 3 shows the accuracy results, i.e., the percentage of shortest paths that are correctly obtained. To better understand the effectiveness, we additionally train a reference model (denoted SUP) with direct supervision (i.e., the actual distance from each node to the destination). It can be observed that our approaches not only outperform the existing competitors, but also achieve results comparable to the directly supervised model SUP.
id: 9790d9c5f4909506c14ebe7ac6a870a6 - page: 8
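The component-wise random walk and projection described above can be sketched as follows; the dimension, dropped indices, and noise range come from the text, while the 0-indexing is an assumption made for illustration:

# Sketch of the random-walk proposal described above (not the authors' code).
# z has 30 components; one randomly chosen component is perturbed per step with
# uniform noise in [-5, 5]; the projection drops the five listed components.
import random

DIM = 30
DROPPED = {5, 10, 15, 20, 25}  # indices removed by the projection (0-indexed here)

def random_walk_step(z):
    """Propose z' by adding Uniform(-5, 5) noise to one randomly chosen component."""
    z_new = list(z)
    i = random.randrange(DIM)
    z_new[i] += random.uniform(-5.0, 5.0)
    return z_new

def project(z):
    """Drop z_5, z_10, z_15, z_20, z_25."""
    return [v for i, v in enumerate(z) if i not in DROPPED]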
6 RELATED WORK

Neuro-symbolic learning. To build a robust computational model integrating concept acquisition and manipulation (Valiant, 2003), neuro-symbolic computing provides an attractive way to reconcile neural learning with logical reasoning. Numerous studies have focused on symbol grounding to enable conceptualization for neural networks. An in-depth introduction can be found in recent surveys (Marra et al., 2021; Garcez et al., 2022). According to how the symbolic reasoning component is handled, we categorize the existing work as follows.
id: 2d60c7f830ab23276a59624d538429fa - page: 9
Learning with logical constraints. Methods in the first category parse the symbolic reasoning into an explicit logical constraint, and then translate the logical constraint into a differentiable loss function, which is incorporated as a constraint or regularizer in network training. Although several methods, e.g., Hu et al. (2016); Xu et al. (2018); Nandwani et al. (2019); Fischer et al. (2019); Hoernle et al. (2022), have been proposed to handle many different forms of logical constraints, most of them tend to avoid symbol grounding. In other words, they often only constrain the network's behavior, but do not guide the network's conceptualization.
id: 6f6832e4ff5e319603d33332096d2886 - page: 9
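To make the "constraint as differentiable loss" idea concrete, here is a generic semantic-loss-style toy (our illustration, not the exact formulation of any method cited above): the probability mass a network places on assignments satisfying "a XOR b" becomes a loss term that gradients can flow through:

# Illustrative sketch (not any cited paper's exact formulation): a logical
# constraint "a XOR b" compiled into a differentiable loss over network
# probabilities via weighted model counting, semantic-loss style.
import torch

def xor_semantic_loss(pa, pb):
    """-log P(a XOR b) under independent Bernoulli outputs pa, pb."""
    p_sat = pa * (1 - pb) + (1 - pa) * pb  # mass on satisfying assignments
    return -torch.log(p_sat + 1e-12)

pa = torch.sigmoid(torch.tensor(0.3, requires_grad=True))
pb = torch.sigmoid(torch.tensor(-0.2, requires_grad=True))
loss = xor_semantic_loss(pa, pb)
loss.backward()  # gradients push the outputs toward satisfying the constraint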
Learning from symbolic reasoning. Another way is to regard the network's output as a predicate, and then maximize the likelihood of a correct symbolic reasoning (a.k.a. learning from entailment (Raedt et al., 2016, Sec. 7)). In some of these methods (Manhaeve et al., 2018; Yang et al., 2020; Pryor et al., 2022; Winters et al., 2022), symbol grounding is often conducted in an implicit manner (as shown by Proposition 2), which limits the efficacy of network learning. Other methods (Li et al., 2020; Dai et al., 2019) achieve explicit symbol grounding in an abductive way, but still depend heavily on a good initial model. Our proposed method falls into this category, but it not only explicitly models the symbol grounding, but also alleviates the sensitivity to the initial model by enabling interaction between neural learning and symbolic reasoning.
id: 482052bc332674c90315cd94c991ba5b - page: 9
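A generic toy version of the learning-from-entailment objective (our illustration, not any cited system's exact likelihood): marginalize the network's probabilities over all symbol assignments that entail the target conclusion, then maximize that marginal:

# Toy sketch of learning from entailment (generic, not any cited system's
# exact objective): maximize P(conclusion) by summing network probabilities
# over all symbol assignments that logically entail the conclusion.
import itertools
import torch

def entailment_likelihood(probs, entails):
    """probs[i] = P(symbol_i = 1); sum P(z) over assignments z with entails(z)."""
    total = torch.tensor(0.0)
    for z in itertools.product([0, 1], repeat=len(probs)):
        if entails(z):
            p = torch.tensor(1.0)
            for zi, pi in zip(z, probs):
                p = p * (pi if zi == 1 else 1 - pi)
            total = total + p
    return total

# Example: conclusion holds iff at least two of three symbols are true.
probs = torch.sigmoid(torch.randn(3, requires_grad=True))
loss = -torch.log(entailment_likelihood(probs, lambda z: sum(z) >= 2) + 1e-12)
loss.backward()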
How to Retrieve?
# Search

curl -X POST "https://search.dria.co/hnsw/search" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"rerank": true, "top_n": 10, "contract_id": "4Es8cqUN-lCue2GPmCeS25Q9UObKubFsv30kg2KX4Qw", "query": "What is alexanDRIA library?"}'
        
# Query

curl -X POST "https://search.dria.co/hnsw/query" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"vector": [0.123, 0.5236], "top_n": 10, "contract_id": "4Es8cqUN-lCue2GPmCeS25Q9UObKubFsv30kg2KX4Qw", "level": 2}'