Loading Data from Iceberg Tables into Nebula #5902
Unanswered
john-thoma asked this question in Q&A
Replies: 1 comment
NebulaGraph Importer is a Go binary that loads CSV data into NebulaGraph; it comes with a YAML configuration that defines how the data is mapped. NebulaGraph Exchange is a Spark application that supports a bunch of different data sources, and Hive is one of them. https://docs.nebula-graph.io/3.8.0/import-export/use-importer/
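For a feel of the Importer approach, a minimal YAML configuration might look roughly like the sketch below. The exact key names vary between Importer versions, and the addresses, space name, and file path here are placeholders, so please treat this as an illustration and check the linked docs for the authoritative schema:

```yaml
# Sketch of an Importer config: load person.csv as a "Person" tag.
client:
  version: v3
  address: "127.0.0.1:9669"   # graphd address (placeholder)
  user: root
  password: nebula
manager:
  spaceName: my_space          # target graph space (placeholder)
  batch: 128
sources:
  - path: ./person.csv         # CSV exported from your source tables
    csv:
      delimiter: ","
      withHeader: false
    tags:
      - name: Person
        id:
          type: "STRING"
          index: 0             # column 0 becomes the vertex ID
        props:
          - name: "name"
            type: "string"
            index: 1           # column 1 maps to the "name" property
```

The general idea is that each `sources` entry binds one CSV file to one or more tags/edges by column index.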
Hi everyone,
I'm currently working on a use case that transfers data from Iceberg tables into NebulaGraph. My current approach uses Parquet as an intermediate format. Writing data directly to CSV from Iceberg tables consumes a large amount of storage and seems inefficient for distributed loading.
I'd appreciate any insights from those who have experience with similar data loading processes. Specifically, I'm interested in strategies for efficiently distributing and loading large volumes of Iceberg table data into Nebula.
Additionally, I'm curious if there is a way to load data from the tables in a Hive catalog directly into Nebula. If such a method exists, it would be perfect for my needs.
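From skimming the NebulaGraph Exchange docs, a Hive-sourced load appears to be driven by an `application.conf`-style file roughly like the sketch below. All names, addresses, and field mappings here are my guesses for illustration, not a verified configuration:

```hocon
{
  # Spark job and NebulaGraph connection settings (addresses are placeholders)
  spark: {
    app: { name: "NebulaGraph Exchange" }
  }
  nebula: {
    address: {
      graph: ["127.0.0.1:9669"]
      meta: ["127.0.0.1:9559"]
    }
    user: root
    pswd: nebula
    space: my_space
  }
  # One entry per tag; rows are read from Hive via a SQL statement
  tags: [
    {
      name: person
      type: { source: hive, sink: client }
      exec: "select id, name, age from mydb.person"
      fields: [name, age]          # Hive columns
      nebula.fields: [name, age]   # corresponding NebulaGraph properties
      vertex: { field: id }        # column used as the vertex ID
      batch: 256
      partition: 32
    }
  ]
}
```

If something along these lines works against a Hive catalog that also exposes the Iceberg tables, it would avoid the CSV intermediate entirely.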
Any suggestions or best practices would be greatly appreciated. I'm open to discussions and eager to learn from the community's experiences.
Thank you!