The general steps for a KG pipeline are data extraction, data enrichment, and linking and fusion (with data quality checks at various points in the pipeline). How these steps are implemented technically depends heavily on how you acquire and maintain your sources, and on whether you need manual curation. For example, building a KG from a few static input sources can take a different approach than building one whose input sources change frequently. But overall, what you describe sounds correct.
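To make the stages concrete, here is a minimal sketch in plain Python. All function names, the `(subject, predicate, object)` triple format, and the toy alias table are illustrative assumptions, not any particular KG framework:

```python
# Illustrative sketch of the pipeline stages: extraction, enrichment,
# linking/fusion, and a quality check. Names and data are hypothetical.

def extract(raw_records):
    """Extraction: turn raw source records into (subject, predicate, object) triples."""
    return [(rec["name"], "worksFor", rec["employer"]) for rec in raw_records]

def enrich(triples):
    """Enrichment: derive extra triples (here: type every employer as an Organization)."""
    derived = [(o, "type", "Organization") for (_, p, o) in triples if p == "worksFor"]
    return triples + derived

def link_and_fuse(triples, same_as):
    """Linking/fusion: map aliases to canonical identifiers and drop duplicates."""
    canon = lambda x: same_as.get(x, x)
    return sorted({(canon(s), p, canon(o)) for (s, p, o) in triples})

def quality_check(triples):
    """Quality check: reject triples with empty fields."""
    return [t for t in triples if all(t)]

records = [{"name": "Alice", "employer": "ACME Corp"},
           {"name": "Bob", "employer": "ACME Corporation"}]
kg = quality_check(link_and_fuse(enrich(extract(records)),
                                 same_as={"ACME Corporation": "ACME Corp"}))
for t in kg:
    print(t)
```

The fusion step is where the two spellings of the same employer collapse into one node; in a real pipeline that `same_as` mapping would come from an entity-resolution step rather than a hand-written dictionary, and a high-change-frequency source would re-run these stages incrementally instead of as one batch.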