Thank you for the detailed context and questions. Here are a few suggestions that may help:
First, consider splitting the export into smaller time windows rather than exporting the whole table in one run. You can repeat this for different time windows until the full history is covered. This has two benefits: each export job is smaller and less likely to hit resource limits or fail halfway, and if one job does fail you only need to re-run that specific time range, not the whole 27 GB again.
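A minimal sketch of what that could look like, assuming the export tool in your GreptimeDB version accepts `--start-time`/`--end-time` bounds (please check `greptime cli export --help` for the exact flag names in 0.17.x); the address, database name, and output directory are placeholders:

```bash
#!/usr/bin/env bash
# Export the history one month at a time instead of in a single 27 GB run.
set -euo pipefail

months=(2024-01 2024-02 2024-03)   # extend to cover your full history

for m in "${months[@]}"; do
  start="${m}-01T00:00:00Z"
  # First day of the following month (GNU date).
  end="$(date -u -d "${start} +1 month" +%Y-%m-%dT00:00:00Z)"

  greptime cli export \
    --addr 127.0.0.1:4000 \
    --database public \
    --output-dir "/mnt/nfs/greptime-export/${m}" \
    --start-time "${start}" \
    --end-time "${end}"
done
```

If one window fails, re-running just that iteration is enough.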
Second, increasing the memory limit is the most effective way to avoid the container being OOM-killed during export. If possible, raise it (for example to 16–32 GB, depending on your VM's capacity); that will significantly reduce the risk of OOM and make the export more stable.
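For reference, a sketch of the change on the Compose side (the service name `greptimedb` and the new limits are only examples; pick values that fit your VM):

```bash
# In docker-compose.yml, raise the limits for the GreptimeDB service, e.g.:
#
#   services:
#     greptimedb:
#       mem_limit: "16GB"   # was "8GB"
#       cpus: 4.0           # leave as-is, or raise if the VM has spare cores
#
# Recreate the container so the new limits take effect:
docker compose up -d greptimedb

# Keep an eye on memory headroom while the export runs:
docker stats
```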
Finally, yes: as long as your new writes do not use timestamps that overlap with the historical data you are importing, you can absolutely start writing to the new Greptime instance while gradually importing the old data. This approach is safe and commonly used when migrating from one instance to another without downtime. The only thing to avoid is writing new data that falls into the same timestamp ranges as the historical batches you are importing. If your system always writes strictly increasing timestamps, you can migrate the history "little by little" without any interruption.
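One way to double-check that boundary before switching writers over: record the newest timestamp already present in the old instance and confirm that all new writes land strictly after it. A hedged example over the MySQL protocol (GreptimeDB's default MySQL port is 4002); the table name `metrics` and time column `ts` are placeholders for your schema:

```bash
# Newest timestamp in the historical data; new writes must be strictly later.
mysql -h old-greptime-host -P 4002 -D public -e "SELECT MAX(ts) FROM metrics;"
```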
Background
I have a standalone Greptime 0.17.2 instance running in Docker via Docker Compose. There are a few other services on that VM, and I have caused some of them to crash by hitting Greptime with too large a query. (I'm still figuring out how to not do that...)
Because of that, I thought moving Greptime to a VM of its own would help ensure I don't take other things down.
At the same time I thought I'd set up data storage on an S3-compatible target. If I ever need to cluster, having the data on S3 already should help.
My problem is that the data export is really, really slow: my metrics table is something like 27 GB, and the export has only produced 19 GB of data in over 20 hours. The docs mention watching system resources, but not much about how to adjust things based on what you see.
I tried exporting a second table at the same time, but that caused the docker container to run out of resources and get killed.
This is inside our data center, so networking should not be an issue. I think we're on 10G at the slowest.
My target storage is an NFS mount with lots of space. I initially started exporting the data the wrong way, so it was going to the local container storage, and that export was moving just as slowly. So the speed issue shouldn't be caused by NFS, I hope.
I do have the docker service limited with `mem_limit: "8GB"` and `cpus: 4.0` settings, which I'm guessing might be where the problem is. Specifically, `docker stats` tells me that the container is using well over 200% CPU and almost all of its available RAM.

Questions:
Thanks!