I have a general question about knowledge graphs. We're using GitHub copilot, and my coworkers are convinced that just letting it write its own markdown files and then somehow prompting it to read them is a substitute for structured knowledge. I'm trying to figure out how I can add some kind of supplemental implementation that proves that the knowledge graph can be a more reliable resource for particular things. I'm just not sure what the very smallest MVP step would look like.
You could ask them whether they care about getting the right answers from that approach; studies of this pattern make it clear that the process your colleagues are advocating is only really viable for use cases where being wrong is OK. GenAI did not change the rules of data management. Agents need good data as much as, or more than, their human analyst counterparts. Would your business analysts be happy with a pile of markdown files? I suspect not. They are used to the advantages that a data management and governance program affords them as they do their jobs. Treating AI like a second-class citizen will yield second-class insights at best, and often incorrect ones that the AI is confidently wrong about.
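On the "smallest MVP" part of the question: one possible demonstration is a plain-Python triple store that answers structured questions deterministically, which is something fuzzy retrieval over markdown files cannot guarantee. This is only a sketch under assumptions, not a recommendation of a specific product; the service and team names below are hypothetical example data, and a real implementation would likely use a graph database or an RDF library instead of in-memory lists.

```python
# Minimal knowledge-graph MVP sketch: facts stored as
# (subject, predicate, object) triples. All names are hypothetical.
TRIPLES = [
    ("orders_service", "owned_by", "payments_team"),
    ("orders_service", "depends_on", "postgres"),
    ("orders_service", "depends_on", "kafka"),
    ("billing_service", "owned_by", "payments_team"),
    ("billing_service", "depends_on", "postgres"),
]

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the pattern (None acts as a wildcard)."""
    return [
        (s, p, o)
        for (s, p, o) in TRIPLES
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# "What does orders_service depend on?" -- an exact, reproducible answer.
deps = [o for (_, _, o) in query("orders_service", "depends_on")]
print(deps)  # ['postgres', 'kafka']

# "Which services does payments_team own?" -- inverse lookup over the same data,
# with no re-prompting and no chance of a hallucinated extra service.
owned = [s for (s, _, _) in query(predicate="owned_by", obj="payments_team")]
print(owned)  # ['orders_service', 'billing_service']
```

The point of the demo is the inverse lookup: the same curated facts answer questions in both directions with guaranteed completeness, whereas an agent grepping its own markdown notes may or may not surface every relevant line.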
I went to the new coworker store and they were all out of coworkers.