About Semantic Processors

This appendix assumes that you already have these semantic building blocks in place, and therefore focuses on how to configure semantic processors manually in the analysis pipeline.

Recall that the Analysis pipeline consists of several stages:

  • The Document processing stage, performed using Document Processors. See Appendix - Configure Document Processors.

  • The Semantic processing stage, performed using the Semantic Processors described in this appendix.

  • The mapping stage, which maps DocumentChunks and Semantic annotations to index fields.
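The three stages above can be pictured as a simple sequence of transformations. The following sketch is purely illustrative: the function names and data shapes are assumptions for this example, not the product's actual API.

```python
# Illustrative three-stage pipeline; all names here are hypothetical.

def document_processing(raw_text):
    """Document Processors split raw content into DocumentChunks."""
    return [chunk for chunk in raw_text.split("\n\n") if chunk]

def semantic_processing(chunks):
    """Semantic Processors attach annotations to each DocumentChunk."""
    return [{"chunk": c, "annotations": []} for c in chunks]

def mapping(annotated_chunks):
    """Map DocumentChunks and Semantic annotations to index fields."""
    return {"text": [a["chunk"] for a in annotated_chunks]}

doc = "First paragraph.\n\nSecond paragraph."
index_record = mapping(semantic_processing(document_processing(doc)))
```

Each stage consumes the previous stage's output, which is why the order of the stages is fixed.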

See Also
The Analysis Pipeline Sequence of Processors
Configuring the Analysis Pipeline Manually
Appendix - Configure Semantic Processors
Appendix - Semantic Resources Reference

The Semantic processing stage runs a list of Semantic Processors, which process each DocumentChunk of each document sequentially, except the chunks for which Semantic processing is disabled in the mapping.
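The control flow described here, each processor applied in order to every chunk unless the mapping disables semantic processing for that chunk, can be sketched as follows. The `semantic_processing` flag and the processor signature are assumptions made for this example.

```python
# Hypothetical sketch: Semantic Processors run in order on each
# DocumentChunk, skipping chunks disabled in the mapping.

def run_semantic_stage(chunks, processors):
    for chunk in chunks:
        # The mapping may disable Semantic processing per chunk
        # (flag name is an assumption for this sketch).
        if not chunk.get("semantic_processing", True):
            continue
        for processor in processors:
            processor(chunk)  # processors run sequentially
    return chunks

def lowercase_processor(chunk):
    """Toy processor: records a lowercased copy as an annotation."""
    chunk.setdefault("annotations", []).append(chunk["text"].lower())

chunks = [
    {"text": "Hello World"},
    {"text": "RAW DATA", "semantic_processing": False},
]
run_semantic_stage(chunks, [lowercase_processor])
```

Note that the disabled chunk passes through the stage untouched; only enabled chunks accumulate annotations.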

The Semantic processing stage first segments the text into 'tokens', then processes the text as a flow of tokens, producing Semantic annotations for each token.
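As a rough illustration of this token flow, the sketch below segments a chunk's text into tokens and emits one annotation per token. The token attributes shown (offset, kind) are assumptions for the example, not the product's actual annotation schema.

```python
# Illustrative tokenization producing one annotation per token;
# the annotation fields are hypothetical.
import re

def segment(text):
    """Split a DocumentChunk's text into (token, offset) pairs."""
    return [(m.group(), m.start()) for m in re.finditer(r"\w+", text)]

def annotate(tokens):
    """Produce one Semantic annotation per token (a simple kind label)."""
    return [
        {"token": tok, "offset": off,
         "kind": "number" if tok.isdigit() else "word"}
        for tok, off in tokens
    ]

annotations = annotate(segment("Order 42 shipped"))
```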