what is this about?

Werner mentioned at some point that we need some way to collect the results of
our analysis transparently, while the analysis is ongoing.
I do not mean only the analysis you all know, which aims
at getting some theta^2 plot or some light curve, but all the different kinds
of information which might be used for understanding certain features.

Let me give a small example:
Assume we want to analyse a certain period of data, say 4 runs.
Usually you then have the following files:
 * N data files
 * 1 pedestal file
 * 1 LP file
 * DRS amplitude calibration files
 * DRS time calibration files
 * several slow control files
 * 1 closed shutter run, for gain analysis
But depending on what kind of analysis you do, some of these files might be missing.
You might not even have physics data, because you are analysing LP data at the moment.

And from this comparatively large and complicated set of files containing
different information, you plan to retrieve different kinds of information,
such as:
 * the behaviour of the baseline, using chunks of data of, say, 500 events
 * the behaviour of the interleaved LP data
 * image parameters from the physics events
   - using the information from interleaved ped and LP events *and*
   - using the information from the dedicated ped and LP runs
 * ...

And in order to start your analysis, you might apply some of the basic classes
we currently have at hand.
Assume you:
 * apply DRS amplitude calibration & spike removal
 * do some sort of filtering of the data
 * test some sort of signal extraction, which might need some
   special information from the dedicated pedestal run, for example.

And assume you aim to test how the signal extraction behaves if the
way you calculate the pedestal changes.
I guess at this point you would be glad to have some transparent way of
treating all these intermediate results.
And in the end, you might want to print some sort of
 "what files and classes did I use in order to generate this plot"
summary.

If you could get this automatically, without noting it in your laboratory
notebook, because the results, and the way you got them, store themselves
somewhere, I guess you would be glad.

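For illustration only (the record layout, file names and class names below are
invented for this sketch, they are not an existing interface), such a
self-describing record and the printed summary could look roughly like this:

    # a hypothetical record that an analysis step keeps up to date as it runs
    provenance = {
        'plot':    'theta_square_20120314.png',
        'files':   ['20120314_042.fits.gz', '20120314_012.fits.gz'],
        'classes': ['DrsCalibration', 'SpikeRemoval', 'SignalExtraction'],
    }

    # the "what files and classes did I use" summary, printed instead of
    # being copied by hand into the laboratory notebook
    print('plot   : ' + provenance['plot'])
    print('files  : ' + ', '.join(provenance['files']))
    print('classes: ' + ', '.join(provenance['classes']))
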
At the time of writing (14.03.12) we are a bit bound to do all these analyses apart from
each other, and using the results of, say, some baseline analysis step in the
analysis steps that might follow is not yet nicely supported.

Doing a special and maybe complicated analysis in several small steps
is not a bad thing. So it would be nice to have some means of tracking what
was done in the entire analysis, and also some way of collecting all the
results, including intermediate results, which are needed for debugging of course.


So I made up my mind, and I think there is nothing that has to be written
or developed. The normal standard dict class of python is all we need.
It is simple to use, flexible enough for storing whatever one likes,
and can be written to a file and read back using the standard pickle module.
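
A minimal sketch of that idea (all keys, file names and numbers are made up
for illustration; nothing here is an existing interface):

    import pickle

    # one plain dict collects the results and intermediate results of a
    # session; the keys, file names and numbers are only placeholders
    results = {}

    # the same signal extraction tried with two ways of calculating the pedestal
    results['signal_with_dedicated_pedestal']   = {'mean_charge': 55.2}
    results['signal_with_interleaved_pedestal'] = {'mean_charge': 53.8}

    # an intermediate result, e.g. the baseline in chunks of 500 events
    results['baseline_per_chunk'] = [2.1, 2.3, 1.9]

    # and the "how did I get this" information
    results['input_files'] = ['20120314_042.fits.gz', '20120314_012.fits.gz']

    # write everything to disk in one go ...
    with open('results.pkl', 'wb') as outfile:
        pickle.dump(results, outfile)

    # ... and read it back in a later analysis step, or when making the final plot
    with open('results.pkl', 'rb') as infile:
        results = pickle.load(infile)

    print(results['baseline_per_chunk'])

Nested dicts pickle just as well, so every analysis step can add its own
sub-dict without interfering with the others.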