=== With Bulk create ... ===

Filling 3 years of slow data into the DB should now only take about 3 days. That's feasible, I think.

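The speed-up comes from writing many rows per query instead of one row at a time. Below is only a rough sketch of the idea, assuming the filler uses the Django ORM (which is what "bulk create" sounds like, but that is an assumption) and with a completely made-up model for one slow data service:

<source lang="python">
# Sketch only: assumes a Django project; model and field names are invented.
from django.db import models


class SlowDataSample(models.Model):
    """Hypothetical table for one slow data service."""
    time = models.DateTimeField(unique=True)   # the "Time" key constraint
    value = models.FloatField()


def fill_chunk(samples):
    """Insert a whole chunk of (time, value) rows with a single query.

    One bulk_create() call replaces thousands of individual save()
    calls (one INSERT each), which is where the speed-up down to
    ~3 days for 3 years of slow data comes from.
    """
    rows = [SlowDataSample(time=t, value=v) for t, v in samples]
    SlowDataSample.objects.bulk_create(rows, batch_size=10000)
</source>
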
We skipped a couple of slow data services when inserting into the DB, because we didn't understand what they mean.
As a test, we have so far inserted everything (apart from the skipped services) from 01.04.2014 until 04.01.2015 into the DB, so it is not quite a full year. The size of the DB is about 400 GB.
Currently we are running on a 500 GB SSD (of which only about 450 GB are usable).

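For scale, a rough extrapolation (assuming the data rate stays roughly constant): about 400 GB for those nine months is a bit over 500 GB per year, so three full years would come to roughly 1.6 TB, which clearly does not fit on the SSD.
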
Today I am moving the data to a 1.5 TB conventional hard disk and will try to fill in everything from 2012 until now. I will go backwards in time, starting with the data from 2015, and see how far back I get.


== The Filler ==
Currently the filler works on the aux-files that were copied from La Palma to ETH. It does not care about what data has already been filled into the DB, but simply tries to fill it again (which fails because of the "Time" key constraint). This is not very efficient.

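A cheap way to avoid the pointless re-inserts would be to only hand over rows whose Time is not yet in the DB. Continuing the hypothetical Django-ORM sketch from above (again made-up names, not the actual filler code):

<source lang="python">
from django.db.models import Max


def fill_only_new(samples):
    """Insert only rows newer than the newest Time already in the DB,
    instead of letting the "Time" unique constraint reject every
    duplicate individually.

    Good enough for the usual case where an aux-file only appends
    new rows at the end.
    """
    newest = SlowDataSample.objects.aggregate(Max("time"))["time__max"]
    rows = [SlowDataSample(time=t, value=v)
            for t, v in samples
            if newest is None or t > newest]
    if rows:
        SlowDataSample.objects.bulk_create(rows, batch_size=10000)
</source>
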
In the future the filler should a) be a constantly running process, like the data logger, which fills the aux-DB in real time, and b) provide some way to check whether something is missing and re-fill it from the aux-file, in case the real-time filler loses its connection to the DB and stops writing to it.

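The "re-fill what is missing" part could look roughly like this. This is only a sketch using the made-up SlowDataSample model from above; reading the aux-file is stubbed out because the existing filler already does that, and the noon-to-noon night boundary is an assumption.

<source lang="python">
import datetime


def read_aux_samples(night):
    """Return (time, value) pairs for one night from the copied
    aux-files.  Stub: the existing filler already knows how to do this."""
    raise NotImplementedError


def refill_night(night):
    """Re-fill one night from its aux-file, inserting only the rows
    whose Time is not yet in the DB (e.g. because the real-time filler
    lost its DB connection for a while)."""
    start = datetime.datetime.combine(night, datetime.time(12))  # assumed night boundary
    end = start + datetime.timedelta(days=1)
    samples = list(read_aux_samples(night))
    existing = set(SlowDataSample.objects
                   .filter(time__gte=start, time__lt=end)
                   .values_list("time", flat=True))
    missing = [SlowDataSample(time=t, value=v)
               for t, v in samples if t not in existing]
    if missing:
        SlowDataSample.objects.bulk_create(missing, batch_size=10000)
</source>

The per-Time existence check is what would make it safe to run this over nights that are already partly in the DB, unlike the "only insert newer than the newest Time" shortcut above.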