What is this about?

Werner mentioned at some point that we need some way to collect the results of
our analysis transparently while the analysis is ongoing.
I do not mean only the analysis you all know, which aims
at producing some theta^2 plot or some light curve, but all the different kinds
of information that might be used for understanding certain features.

Let me give a small example:
Assume we want to analyse some stretch of data, say 4 runs.
Usually you then have the following files:
* N data files
* 1 pedestal file
* 1 LP file
* DRS amplitude calibration files
* DRS time calibration files
* several slow control files
* 1 closed-shutter run, for gain analysis
But depending on what kind of analysis you do, some of these files might be missing.
You might not even have physics data, since you are analysing LP data at the moment.

And from this comparatively large and complicated set of files containing
different information, you plan to retrieve different kinds of information,
such as:
* the behaviour of the baseline, using chunks of data of say 500 events
* the behaviour of the interleaved LP data
* image parameters from the physics events
  - using the information from the interleaved ped and LP events *and*
  - using the information from the dedicated ped and LP runs
* ...

And in order to start your analysis, you might apply some of the basic classes
we currently have at hand.
Assume you:
* apply DRS amplitude calibration & spike removal
* do some sort of filtering of the data
* test some sort of signal extraction, which might need some
special information from the dedicated pedestal run, for example.

And assume you aim to test how the signal extraction behaves if the
way you calculate the pedestal changes.
I guess at this point you would be glad to have some transparent way of
treating all these intermediate results.
And in the end, you might want to print some sort of
"what files and classes did I use in order to generate this plot" summary.

If you could get this without noting it in your laboratory notebook,
but rather automatically, because the results, and the way you got them,
store themselves somewhere, I guess you would be glad.

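Just to make this concrete, here is one possible shape such a mechanism could
take; the step name, file name, and dict keys below are invented for
illustration:

```python
# A shared dict collects every result together with a note on how it was made.
# Step names, file names, and keys here are purely illustrative.
results = {'history': []}

def run_step(name, inputs, func):
    """Run one analysis step and record its provenance in the results dict."""
    results[name] = func(inputs)
    results['history'].append({'step': name, 'inputs': inputs})
    return results[name]

# Hypothetical pedestal step on a hypothetical run file:
run_step('pedestal', ['20120314_040.fits.gz'],
         lambda files: 'mean pedestal per pixel would go here')

# Later you can simply ask what was done to get this result:
print(results['history'])
```
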
At the time being (14.03.12) we are a bit bound to do all these analyses
separately from each other, and using the results of, say, some baseline
analysis steps is not yet nicely integrated into the analysis steps that
might follow.

And doing a special and maybe complicated analysis in several small steps
is not a bad thing. So it would be nice to have some means of tracking what
was done in the entire analysis, and also some way of collecting all the
results, including intermediate results, which are of course needed for
debugging.

So I made up my mind, and I think there is nothing that has to be written
or developed. The standard Python dict class is all we need.
It is simple to use, flexible enough to store whatever one likes,
and can be written to a file using the pickle module.
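
As a minimal sketch of how this could look in practice (the keys, file names,
and class names below are again only examples):

```python
import pickle

# Store results, intermediate results, and provenance in one plain dict.
# All keys, file names, and class names are only examples.
results = {
    'input_files': ['20120314_042.fits.gz'],            # hypothetical data run
    'baseline':    {'chunk_size': 500,                   # events per chunk
                    'means': [2.1, 2.0, 2.3]},           # e.g. one value per chunk
    'made_with':   ['DrsCalibration', 'SpikeRemoval'],   # hypothetical class names
}

# Write everything to disk ...
with open('analysis_results.pkl', 'wb') as f:
    pickle.dump(results, f)

# ... and load it back in a later session:
with open('analysis_results.pkl', 'rb') as f:
    results = pickle.load(f)

# "What files and classes did I use in order to generate this plot?"
print(results['input_files'], results['made_with'])
```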