We present Temporal and Object Quantification Networks (TOQ-Nets), a new class of neuro-symbolic networks with a structural bias that enables them to learn to recognize complex relational-temporal events.
Our model includes reasoning layers that implement finite-domain quantification over objects and time. The structure allows them to generalize directly to input instances with varying numbers of objects in temporal sequences of varying lengths. We evaluate TOQ-Nets on input domains that require recognizing event-types in terms of complex temporal relational patterns. We demonstrate that TOQ-Nets can generalize from small amounts of data to scenarios containing more objects than were present during training and to temporal warpings of input sequences.
- Input Representation
- The input to a TOQ-Net is a tensor representation of the properties of all entities at each moment in time.
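A minimal sketch of this representation, assuming (hypothetically) T time steps, N entities, and F per-entity features; the exact axis order and feature sets in the official implementation may differ:

```python
import numpy as np

# Hypothetical dimensions: T time steps, N entities, F properties per entity.
T, N, F = 8, 3, 4

# states[t, i] is the feature vector describing the properties of
# entity i at time step t.
states = np.random.rand(T, N, F).astype(np.float32)

print(states.shape)  # (8, 3, 4)
```

Because the entity and time axes are explicit, the same network weights can be applied to inputs with a different N or T.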
- Input Feature Extractor
- The first layer of a TOQ-Net extracts temporal features for each entity with an input feature extractor that focuses on entity features within a fixed and local time window.
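One way to realize such a fixed local time window is to stack, for each entity and time step, the features from a window of `w` neighboring steps (a temporal-convolution-style extractor). This NumPy sketch uses assumed shapes and zero padding at the sequence boundaries; it is an illustration, not the official implementation:

```python
import numpy as np

def local_window_features(states, w=3):
    """Stack each entity's features over a fixed local time window.

    states: [T, N, F] tensor of per-entity features over time.
    Returns: [T, N, w * F], where position t holds the concatenated
    features of the w steps centered (roughly) at t, zero-padded
    at the boundaries.
    """
    T, N, F = states.shape
    pad = w // 2
    padded = np.pad(states, ((pad, pad), (0, 0), (0, 0)))
    # For each t, take the window [t, t + w), move the entity axis
    # first, and flatten the (window, feature) axes together.
    return np.stack([
        padded[t:t + w].transpose(1, 0, 2).reshape(N, w * F)
        for t in range(T)
    ])

states = np.zeros((8, 3, 4), dtype=np.float32)
feats = local_window_features(states, w=3)
print(feats.shape)  # (8, 3, 12)
```

Since the window size is fixed and local, the extractor applies unchanged to sequences of any length T.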
- Relational Reasoning Layers
- Next, these temporal-relational features pass through several relational reasoning layers, each of which applies linear transformations, sigmoid activations, and object quantification operations. The linear and sigmoid functions allow the network to realize learned Boolean logical functions, and the object quantification operators can realize quantifiers over entities.
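The object quantification step can be illustrated with soft approximations of the logical quantifiers: a max over an object axis approximates "exists" and a min approximates "forall". The sketch below, with assumed shapes and randomly initialized weights, shows one such layer acting on binary-relation features; the real layers may differ in arity handling and normalization:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relational_layer(h, W, b):
    """One relational reasoning layer (illustrative sketch).

    h: [N, N, F] features for each ordered pair of entities (x, y).
    A linear map + sigmoid produces soft truth values; reducing the
    second object axis with max/min realizes soft quantifiers:
      max over y  ~  "exists y. phi(x, y)"
      min over y  ~  "forall y. phi(x, y)"
    Returns: [N, 2 * G] unary features, one row per entity x.
    """
    z = sigmoid(h @ W + b)          # [N, N, G] soft Boolean outputs
    exists = z.max(axis=1)          # [N, G]
    forall = z.min(axis=1)          # [N, G]
    return np.concatenate([exists, forall], axis=-1)

N, F, G = 3, 4, 5
h = np.random.rand(N, N, F)
W = np.random.rand(F, G)
b = np.zeros(G)
out = relational_layer(h, W, b)
print(out.shape)  # (3, 10)
```

Because max and min are defined for any number of reduced elements, the same weights W and b apply for any number of entities N, which is what enables generalization to scenes with more objects than seen in training.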
- Temporal Reasoning Layers
- The relational reasoning layers perform a final quantification, computing for each time step a set of nullary features that are passed to the temporal reasoning layers. Each temporal reasoning layer performs linear transformations, sigmoid activation, and temporal quantification, allowing the model to realize a subset of linear temporal logic.
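Temporal quantification can likewise be illustrated with running reductions over the time axis: a cumulative max over a feature approximates an "eventually (so far)" operator and a cumulative min approximates "always (so far)". This is a simplified sketch of the idea with assumed shapes, not the official layer:

```python
import numpy as np

def temporal_quantify(s):
    """Temporal quantification sketch over nullary features.

    s: [T, C] soft truth values, one row per time step.
    Cumulative max/min over time approximate LTL-style operators
    evaluated on the prefix up to each step t:
      running max  ~  "eventually, by time t"
      running min  ~  "always, up to time t"
    Returns: [T, 2 * C].
    """
    eventually = np.maximum.accumulate(s, axis=0)  # [T, C]
    always = np.minimum.accumulate(s, axis=0)      # [T, C]
    return np.concatenate([eventually, always], axis=-1)

s = np.array([[0.2], [0.9], [0.1]])
out = temporal_quantify(s)
print(out)
# [[0.2 0.2]
#  [0.9 0.2]
#  [0.9 0.1]]
```

As with object quantification, these reductions are defined for any sequence length, so the layer tolerates temporal warpings and variable-length inputs without retraining.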
- Temporal and Object Quantification Networks in [PyTorch (Official, Coming Soon)].