Thank you, Pascal H. I think I need to reframe my question. I now understand that views provide a simplified form of an existing Ontology Design Pattern, and that typecasting allows for the recoding of entities or their properties so they align as comparable objects (i.e., entities or units of analysis) for the purposes of, for example, a useful query. (This is my first foray into this lingo, but I've built plenty of ontologies, data models, and databases, so I hope I'm expressing my understanding in a way that is legible.)
My real question, though, concerns the following scenario: imagine you're trying to construct a knowledge graph from scratch, starting with a giant pile of unstructured digital documents with messy, unreliable, or missing metadata. In the social sciences and digital humanities this is a common issue. Researchers may attempt to bring order to the chaos using natural language processing techniques like named entity recognition, document clustering, topic modeling, etc., but they often find that these methods, while applicable at high scale, fail to identify the content of the documents in a way that satisfies their pre-existing ontologies (i.e., social theories). So they vacillate between classifying the documents and their content by hand according to their valid, complex theories, and using computer-aided approaches that work at scale but fail to reliably apply that complex theory/ontology to the unstructured texts. Is this also a problem/tradeoff experienced in the space of knowledge graph design and construction? (I appreciate you helping me calibrate my understanding.)
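To make the tension concrete, here is a minimal, purely illustrative sketch in Python. The corpus, gazetteer patterns, and ontology class names (e.g. `CollectiveActor`) are all hypothetical assumptions, not anyone's real pipeline: a coarse, scalable entity extractor produces generic NER labels, and only some of those labels can be typecast into the richer theory-driven classes the researcher actually wants, leaving a residue that would need hand coding.

```python
import re

# Hypothetical toy corpus; content is illustrative only.
DOCS = [
    "Durkheim argued that anomie rises when social norms weaken.",
    "The union organized a strike against the factory owners.",
]

# A simple gazetteer-style NER: pattern -> coarse entity type.
# Stands in for a real NER model; all entries are assumptions.
GAZETTEER = {
    r"\bDurkheim\b": "PERSON",
    r"\bunion\b": "ORG",
    r"\bfactory\b": "ORG",
}

# A richer, theory-driven ontology the researcher actually wants.
# Coarse NER labels map onto it only partially: the theory cares about
# roles (e.g. collective actors), not persons per se.
THEORY_ONTOLOGY = {
    "PERSON": None,
    "ORG": "CollectiveActor",
}

def extract_entities(text):
    """Scalable but coarse: returns (surface form, NER label) pairs."""
    hits = []
    for pattern, label in GAZETTEER.items():
        for m in re.finditer(pattern, text):
            hits.append((m.group(), label))
    return hits

def align_to_theory(entities):
    """Typecast coarse labels into theory classes; unmapped ones fall through."""
    aligned, unaligned = [], []
    for surface, label in entities:
        cls = THEORY_ONTOLOGY.get(label)
        (aligned if cls else unaligned).append((surface, cls or label))
    return aligned, unaligned

all_entities = [e for d in DOCS for e in extract_entities(d)]
aligned, unaligned = align_to_theory(all_entities)
print(aligned)    # coarse labels the theory ontology can absorb
print(unaligned)  # residue that would need hand coding
```

The `unaligned` residue is exactly where the hand-coding vs. scale tradeoff bites: the extractor ran over every document cheaply, but the theory's classes only partially absorb its output.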