does anyone have any session analysis tools?

daz

Remote viewer, author, artist and photographer.
Staff member
Does anyone use, or have, any good session analysis tools or spreadsheets that I could use?

I currently do this manually, but that takes a lot of time, whereas a spreadsheet would be cool to have.

OR...

If you have any techniques or ideas for session analysis, I would like to hear them.

daz
 

Rocheleh

New Member
I would just dump the data into SPSS (I suppose you already have a way of quantifying your session data) and fiddle around with it. If you are familiar with the basic concepts of statistics and know what you want to do, it shouldn't be hard to use. If you are not, I would recommend reading an intro text... unfortunately I'm not familiar with English-language intro texts, so I can't recommend one :( In the long term it's probably worth it. In the meantime, I'd like to know more about your way of quantifying your session data: do you use Buchanan's system or something else?
 

wizopeva

Guest
The only two official ways that I know of are:
-track record: the viewer's practice session data is split into categories, and accuracy in each category is recorded over time. That accuracy level is then projected onto new session data. For instance, if a viewer is 90% accurate on colors in practice, it is assumed that in an ops target any colors reported are 90% likely to be correct. One downside is that it takes a lot of commitment and resources to develop the initial database, although one could still keep an informal note of which viewers are good at which kinds of data over time. (A rough sketch of this idea is below, after this list.)

-Comparing across sessions: Data is corroborated across sessions. For instance, if the viewer finds an animal in 3 out of 3 sessions, it's assumed there is likely an animal. Data can also be corroborated across different viewers' sessions, but this works best if those viewers have similar strengths and weaknesses. For instance, if one viewer is really bad at getting animals, he is unlikely to corroborate other viewers who are good at getting lifeforms. One downside is that a lot of correct data may go unused if it's not corroborated; sessions even from the same viewer can be wildly different yet still correct. Another issue is that simple, common ideas are more likely to get corroborated, sometimes just by chance. For instance, three viewers might say 'vertical' and that might not mean as much as three viewers saying 'jello-like', because 'jello-like' is a less commonly used adjective. (A sketch of this one follows further below.)
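Just to make the track-record idea concrete, here is a minimal Python sketch (a spreadsheet with one column per category would do the same job). All the category names, scores and the session format are invented for illustration; the point is only per-category accuracy from practice, projected onto a new session.

# Each practice session scored per category: (descriptors judged correct, descriptors given).
# These numbers are made up.
practice_sessions = [
    {"colors": (9, 10), "shapes": (6, 10), "lifeforms": (3, 5)},
    {"colors": (8, 9), "shapes": (7, 11), "lifeforms": (2, 6)},
    {"colors": (10, 11), "shapes": (5, 9), "lifeforms": (4, 7)},
]

def track_record(sessions):
    # Aggregate per-category accuracy across all practice sessions.
    totals = {}
    for session in sessions:
        for category, (correct, given) in session.items():
            c, g = totals.get(category, (0, 0))
            totals[category] = (c + correct, g + given)
    return {cat: c / g for cat, (c, g) in totals.items()}

accuracy = track_record(practice_sessions)

# Project the practice accuracy onto a new (ops) session: each descriptor
# inherits the viewer's historical accuracy for its category.
new_session = [("colors", "red"), ("shapes", "vertical"), ("lifeforms", "animal")]
for category, descriptor in new_session:
    print(f"{descriptor!r} ({category}): roughly {accuracy.get(category, 0):.0%} likely correct")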

I would bet that a system that incorporates both of these methods would be better than just one way alone.
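And a rough sketch of the corroboration side, along the same lines. The sessions, the 'commonness' table and the weighting are all made up; the idea is just that agreement across sessions counts for more when the descriptor is unusual ('jello-like') than when it is generic ('vertical'). The resulting score could then be multiplied by the per-category accuracy from the first sketch, which is one crude way of combining the two methods.

from collections import Counter

# Each session reduced to a set of descriptors (invented data).
sessions = [
    {"vertical", "red", "animal", "jello-like"},
    {"vertical", "animal", "hard"},
    {"vertical", "animal", "jello-like", "cold"},
]

# A crude "commonness" table: how often a descriptor tends to show up in
# sessions in general. The higher it is, the less corroboration means.
commonness = {"vertical": 0.9, "red": 0.5, "animal": 0.4,
              "jello-like": 0.05, "hard": 0.6, "cold": 0.5}

counts = Counter(d for s in sessions for d in s)

for descriptor, n in counts.most_common():
    support = n / len(sessions)                     # fraction of sessions agreeing
    rarity = 1.0 - commonness.get(descriptor, 0.5)  # rarer descriptors weigh more
    score = support * rarity
    print(f"{descriptor:10s} in {n}/{len(sessions)} sessions, weighted score {score:.2f}")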
-E
 