
Best Practices for Importing Historical Data from CSV Files

I’m planning to import a large number of CSV files containing historical data into our system. To ensure a smooth process, I’d like to understand the best practices for this import. Specifically, I want to ensure that the resulting HDB files do not exceed the 500MB threshold, that data is segregated by day, and that the system remains stable without crashing.

Please share your recommendations and any relevant guidelines to achieve these goals efficiently.

1 reply

    • JIMMY
    • 2 days ago

    I am not sure if this will really help, but in our case I extracted almost 1,200 tags from PI, each as its own CSV file; the largest file was about 124 MB. I dropped the first ten files in and they processed quickly. The next 100 went quickly as well. Since I did not notice any system issues, I dropped in the rest, and they all processed quickly and without errors. The total data size was about 20 GB. We run a fairly small-volume system, so it handled our data without any issues.
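
    The staged approach described above (a small trial batch, then a larger one, then the remainder) can be sketched as a simple batching helper. This is a minimal sketch, not part of the original post: the batch sizes, file names, and any monitoring or waiting between batches are assumptions you would adapt to your own import workflow.

    ```python
    from pathlib import Path
    from typing import Iterable, Iterator, List

    def ramped_batches(files: List[Path], ramp: Iterable[int]) -> Iterator[List[Path]]:
        """Yield batches of files in increasing sizes, then the remainder.

        Mirrors the staged rollout above: import a small batch first,
        confirm the system stays healthy, then proceed with larger batches.
        """
        start = 0
        for size in ramp:
            batch = files[start:start + size]
            if batch:
                yield batch
            start += size
        rest = files[start:]  # everything not covered by the ramp-up sizes
        if rest:
            yield rest

    # Example: 1,200 tag CSVs, staged as 10, then 100, then the rest.
    # (File names are hypothetical placeholders.)
    csv_files = [Path(f"tag_{i:04d}.csv") for i in range(1200)]
    batches = list(ramped_batches(csv_files, [10, 100]))
    for n, batch in enumerate(batches, start=1):
        print(f"batch {n}: {len(batch)} files")
        # Here you would copy the batch into the import drop folder and
        # verify processing completes cleanly before continuing.
    ```

    Between batches, check that the import completed without errors and that system load is acceptable before dropping in the next, larger batch.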


  • Last active: 2 days ago
  • Replies: 1
  • Views: 41
  • Following: 3