-- The Dreaded 'Displacement' Problem --

katzenhai2

Ambassador
NewGuy said:
We are instructed to target the actual session and its results to gain a better understanding of why the displacement occurs. In my brief experience doing so (three sessions targeting the session's false outcome), I got a solid, 'take it to the bank' result. One session was front-loaded, and the other two were tasked blind.
You were instructed to target the "cause" of the actual HARV session result and that couldn't lead to an answer like "take it to the bank". :)
 

Don

New Member
This is a fascinating discussion. However, I haven't been involved in any ARV work for many years (and never had a great deal of luck with it in the first place), so much of the context and a few of the terms being used here are lost on me.

One quick thing I wanted to mention: Gene, I think you were absolutely correct when you made the following comment:

The reason I remain obsessed with this to some degree is I think the solution to displacement has the potential to explain much of the actual mechanics and inner functioning of RV. I mean not just theory but real knowledge and proof of how this stuff works and why.
I agree. The answers to this and a few other problems in remote viewing might really open up the doors to our understanding of how PSI operates in general.

Every ARV project I have been involved in required either the viewer or the tasker seeing both photos at some point during the project. I might try the McMoneagle/May approach soon to see if this makes a difference.
As I mentioned in a previous post, I always thought it was common practice for the RVer to never see the incorrect potential target (meaning the potential target that is associated with the outcome that doesn't come to pass). But - and please forgive my ignorance here - how is the ARV problem set up without the tasker seeing both potential targets? Or without the analyst seeing both potential targets? To set up the target-outcome associations, doesn't the tasker have to see both potential targets? And doesn't the analyst have to see them both in order to judge which target has been described by the viewer? As I said, I've been away from ARV for quite some time, so I'd appreciate it if one of you guys could clarify that for me.

Quote from: PJ
...this tasking idea years ago I set up in a project called Risk Intuit that is 95% done and I might get around to finishing it someday... basically the original task generation (which can be automated or intentional from the PM) selects options that are "potential concepts." The ACTUAL target is the one selected by the PM after the feedback is in. But the selection is done based on the 'winning' "concept."

So all the judge for the session sees is the concept which is like: Lake, or Dancer, or children, or skyscraper, or waterfall, or whatever. The idea is, a) any decent session ought to be able to at least make it clear between those, if not it's a pass, and b) part of the intent of course is for them to describe the elements of the ACTUAL target which of course would reflect the primary concept it was based upon.

In this protocol the PM does not see the sessions btw only judges do. So when outcome arrives, the PM takes the 'option concept' which was accurate, and then goes and selects a "specific instance" of that concept in our reality via google or whatever. So there is never any such thing as another target with details and a photo; there is only another 'concept', generically, and it's intentionally quite different. The only 'target feedback photo' selected is the real one.
I think I understand the above-described concept - by PJ - as a way to prevent the tasker (whom I believe she is referring to as the "PM") from seeing anything but the correct target. However, if I am understanding this correctly, the tasker (or PM) still creates several concepts that are potential targets - all except one of which will be incorrect. Is this correct? If so, isn't the only difference between this method and standard ARV that the potential targets are concepts rather than actual photos or locations (as are usually used for potential targets)? The only difference is "target feedback photo" versus "concept". Correct?
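To make sure I'm picturing it right, here's a rough sketch of that flow as I understand it, in Python. Everything here (the concept list, function names, the trivial keyword matching) is my own hypothetical illustration, not anything from PJ's actual Risk Intuit project:

import random

# Hypothetical sketch of the concept-based ARV flow described above.
# Key point: before the outcome, only abstract concepts exist; a concrete
# feedback photo is chosen only for the concept that actually "won."

CONCEPT_POOL = ["lake", "dancer", "children", "skyscraper", "waterfall"]

def create_trial(outcomes):
    """PM (or automated tasking) pairs each possible outcome with a distinct concept."""
    concepts = random.sample(CONCEPT_POOL, len(outcomes))
    return dict(zip(outcomes, concepts))

def judge_session(session_summary, concept_map):
    """The judge sees only the concepts, never any photos, and picks the best
    match (reduced here to a trivial keyword check for illustration)."""
    for outcome, concept in concept_map.items():
        if concept in session_summary.lower():
            return outcome
    return None  # a pass: the session doesn't clearly favor any one concept

def select_feedback_photo(winning_concept):
    """After the real outcome is known, the PM finds one specific real-world
    instance of the winning concept; no photo ever exists for the other concept."""
    return f"photo_of_a_{winning_concept}.jpg"  # placeholder

trial = create_trial(["team_a_wins", "team_b_wins"])
prediction = judge_session("open water, flat, reflective, a shoreline by a lake", trial)
print("Predicted outcome:", prediction)
actual = "team_a_wins"  # known only after the event
print("Feedback target:", select_feedback_photo(trial[actual]))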

Another quick question:

If you have the time, I was also hoping you could express your opinion on whether or not it is commonplace for people using the CRV method to target erroneous session results with another session to find out what went wrong.
Has anyone ever tried this and, if so, what were your results? This seems like an extremely difficult thing to do.
Thanks, Don.
 

daz

Remote viewer, author, artist and photographer.
Staff member
One of the problems we have here is noise.
Before Ingo created his CRV, it was recognized that the psi process has an 80% noise / 20% signal ratio.
With his CRV, he and Hal claimed to have reversed this to 80% signal / 20% noise. Whether this has been adequately tested or not is often debated. All I can say is that my RV, which is heavily CRV based, is above an 80% signal to noise ratio (most of the time).

ARV is just RV, and to get increasingly accurate results you still need to achieve a high signal to noise ratio - even before you start looking at ARV-style approaches to getting even better results. For example, in the viewers being used, what are their long-term appraised signal to noise ratios? Shouldn't this be known before trying to increase RV accuracy through other means?

To me, it seems that not enough is known about the long-term RV signal/noise ratios of the viewing participants to even think about trying to improve using differing methods - because how can you measure a change in accuracy if you have no ratio baselines for each viewer against which to measure any change in quality, and to tell whether that change came from a style change in an ARV process?
 
Daz,

Here’s how I see it…

One of the problems we have here is noise.
Before Ingo created his CRV, it was recognized that the psi process has an 80% noise / 20% signal ratio.
That’s not my impression of what’s been said about psi by a number of researchers. What sources do you have in mind for that?

With his CRV, he and Hal claimed to have reversed this to 80% signal / 20% noise. Whether this has been adequately tested or not is often debated. All I can say is that my RV, which is heavily CRV based, is above an 80% signal to noise ratio (most of the time).
And you are not only claiming it, but much more importantly showing it – more power to you!

ARV is just RV, and to get increasingly accurate results you still need to achieve a high signal to noise ratio - even before you start looking at ARV-style approaches to getting even better results. For example, in the viewers being used, what are their long-term appraised signal to noise ratios? Shouldn't this be known before trying to increase RV accuracy through other means?
I believe you are talking about databasing results whereby viewer accuracies in comparison with inaccuracies in the session data can be measured - granularly. First, as we’ve all often discussed, few in the field database their RV session data. Those that do don’t display much (if anything) about it in public, so little concrete is actually known from the few groups that do it – and claim it is basic to their work for clients. (If this is no longer the case, and such databasing is being shown, I’ll stand corrected.) Also, some viewers get symbolic data, which is difficult or perhaps impossible to adequately database. Further, in practical RV and ARV work, sometimes all you need is one or two significant databits to get the answer for the client or to make a correct ARV prediction. Finally, sketches can be an important part of either regular RV or ARV, yet they too are difficult to database. All of which argue against the assertion that measuring the accuracy/inaccuracy of each and every data bit in a session is a requirement or some kind of prerequisite.
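Just to make concrete what data-bit level databasing would amount to, here is a minimal sketch. The schema and numbers are hypothetical - not Lyn Buchanan's MS Access setup or Alexis's ACEM, just the general idea:

from dataclasses import dataclass

# Minimal, hypothetical sketch of data-bit level databasing - only the idea,
# not any group's actual system. Note that symbolic data and sketches resist
# this kind of binary scoring, as discussed above.

@dataclass
class DataBit:
    viewer: str
    session_id: str
    description: str
    correct: bool  # judged against feedback

def accuracy_ratio(bits, viewer):
    """Crude per-viewer 'signal to noise' estimate: the fraction of that
    viewer's judged data bits that matched the feedback."""
    mine = [b for b in bits if b.viewer == viewer]
    return sum(b.correct for b in mine) / len(mine) if mine else None

bits = [
    DataBit("viewer_a", "S001", "curved metal structure", True),
    DataBit("viewer_a", "S001", "sense of cold, moving water", False),
    DataBit("viewer_a", "S002", "tall, vertical, man-made", True),
]
print(accuracy_ratio(bits, "viewer_a"))  # 0.666...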

To me, it seems that not enough is known about the long-term RV signal/noise ratios of the viewing participants to even think about trying to improve using differing methods - because how can you measure a change in accuracy if you have no ratio baselines for each viewer against which to measure any change in quality, and to tell whether that change came from a style change in an ARV process?
But what you do know (or ascertain over time) in ARV is the rate of successful picks/predictions by a solo viewer or by a group. Unless the participant(s) is modifying his preparation and viewing in major ways, one would expect that the rate of successful picks would likely remain fairly constant, other things being equal. That is, for example, if adequate breaks are taken in doing series of ARV. If adequate breaks are not taken, experience indicates the rate of success is very likely to fall. And if the rate of success improves with the introduction of a new method, and no other changes of note have been made, one can attribute it to…the new method - once enough events have been undertaken.
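And to put a rough number on "once enough events have been undertaken": one simple way to check whether a change in the pick rate is more than chance is a one-sided binomial calculation against the viewer's (or group's) baseline. The figures below are made up for illustration; this isn't a calculation APP formally prescribes:

from math import comb

def prob_at_least(hits, trials, baseline_rate):
    """One-sided probability of getting at least `hits` successful picks in
    `trials` binary ARV events if the true hit rate were still `baseline_rate`."""
    return sum(comb(trials, k) * baseline_rate**k * (1 - baseline_rate)**(trials - k)
               for k in range(hits, trials + 1))

# Made-up example: a 60% baseline pick rate, and the new method goes 18 for 24.
print(round(prob_at_least(18, 24, 0.60), 3))
# A small value suggests the improvement is unlikely to be chance alone - but
# only after enough events, which is exactly the caveat above.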

To summarize, plenty of successful RV work and ARV work has been done without knowing the precise accuracy-inaccuracy ratios in each viewer’s data output (to the extent this is even obtainable). I am not at all against databasing, either of the Lyn Buchanan MS Access or Alexis Poquiz ACEM/Dung Beetle variety – just saying in reply that one can make progress and do successful work without it.

Jon
 

Don

New Member
One of the problems we have here is noise.
Absolutely. That's the constant issue in all PSI functioning. Hal and Ingo did an excellent job of locating precisely where most of the noise originates - interference from the conscious mind and faulty interpretation of raw perceptions.

ARV is just RV, and to get increasingly accurate results you still need to achieve a high signal to noise ratio - even before you start looking at ARV-style approaches to getting even better results.
I agree with the statement: "you still need to achieve a high signal to noise ratio". That's true for all RV. But the practice of ARV involves complications to an already barely-understood process. To me, that implies a need for an even higher signal-to-noise ratio than common RV calls for - in order to be consistently successful.

On the surface, it may seem that all that is needed is a description that is accurate and detailed enough to differentiate between the two potential targets (in binary ARV). But, given the potential for fracturing and diffusion of intent due to the complications in the targeting and tasking aspects that are an inherent element of the ARV process, I agree with Daz that consistently accurate viewers are required.

But I can't completely agree with Daz that "ARV is just RV". While the specific PSI function might be (and probably is) the same under both protocols, the discrete PSI function is only one aspect of a process that is made up of many functions - and may not be the most important function at that. Other functions, such as target criteria and selection, tasking and cueing, feedback, etc., are quite different (and contain the potential for more, possibly unintentional, feedback loops) and are based on differing priorities in ARV than in common RV. And therein may lie the source of many of the problems people seem to have with ARV, especially that of displacement.

I'm not sure of the relationship between the specific issue we are discussing here - displacement - and the issue of signal-to-noise ratio. I only say this because of the instances of extreme displacement that I've seen - where the viewer gives an almost perfect, otherwise noise-free, description of the incorrect potential target. The excellent description of the incorrect potential target seems to rule out noise as being the issue. And although I suppose you could term that "noise" of a kind, it is not "noise" as the term is usually applied.

Daz, I have a question regarding your 80% to 20% stats. Are you referring to the accuracy of each reported perception within each session? If so, those are excellent numbers. Or are you referring to overall accuracy? Joe M. has claimed an accuracy of around 65-75% over time; if so, then your numbers are even more exciting.

Overall, I agree with Daz. If the viewer is not consistently successful in common remote viewing efforts, attempting changes in the ARV protocols to increase accuracy seems most likely a waste of time.

Don
 
Don,

If the viewer is not consistently successful in common remote viewing efforts, attempting changes in the ARV protocols to increase accuracy seems most likely a waste of time.
I'd just like to say that in the Applied Precognition Project there are many individuals involved (the discussion list is up to 100 people), some of whom are new to RV or ARV - all are invited to take part regardless of their experience. Getting new people involved is a good thing - and this is a reasonable way to do it. Generally they will be doing sessions using simpler techniques that do not require extensive training. Their data-bit rate of success is not known at the outset and is not being measured as their group goes on. (Except that Alexis has developed software that attempts to do this - for any group that wishes to do so.) Each participant can choose which group and/or method they want. They can also start their own group. One group is open to all methods - including dowsing, scrying, finger testing, pendulum - whatever you want. A wide variety of methods - and viewers - are thus being tested throughout the APP. Eventually, we may decide to focus on certain methods, possibly combined with certain viewers, based on past results.

The latest method - and one that many want to take part in - is the Computer Assisted Scoring (CAS) method - of Ed May and associates. This method does have a track record, and an excellent one. I ran one trial with it in which 6 viewers took part - one was quite new to RV. She got one of the 4 hits we had. (All the picks were hits but there were 16 passes.) So just to say that these are the practical circumstances for the APP and it is going quite well.

Jon
 

Don

New Member
Jon K.

Getting new people involved is a good thing - and this is a reasonable way to do it.
I agree. Anything that introduces new people to RV, and simultaneously teaches them the importance of blinding protocols, is a very good thing. I don't know anything about APP, but I'm definitely interested. It sounds exciting and I'd like to know more.

I ran one trial with it in which 6 viewers took part - one was quite new to RV. She got one of the 4 hits we had.
That's not surprising to me. I think native talent is a huge factor in anyone's RV success. Plus, there's that "beginner's luck" thing. Personally, back in 1998 when I started RVing, I experienced amazing accuracy in my first couple RV sessions. Then I experienced a nose-dive in my results for a couple months before finally improving again and stabilizing somewhat.


Also, some viewers get symbolic data, which is difficult or perhaps impossible to adequately database. Further, in practical RV and ARV work, sometimes all you need is one or two significant databits to get the answer for the client or to make a correct ARV prediction. Finally, sketches can be an important part of either regular RV or ARV, yet they too are difficult to database.

I agree with this as well. Sketching is a huge part of my method, much more so than written data. An accurate sketch of a target makes a big impact.

I recall, about 10-12 years ago, one of the PSI-COP debunkers was complaining - in the face of some amazingly accurate RV work that was composed mostly of sketches - that the judging was all subjective. He said that, even if the RVer seemed astoundingly accurate, it was not scientific for this reason (even though the judging was done blind by otherwise non-involved judges). This assertion seemed ludicrous to me. It flies in the face of common sense. But that's the nature of sketches. I'd guess this was the reason RV researchers began to rely on data-bits. The problem is that this form of judging works much better with written data than with sketches. But to me, from a purely human perspective, a good sketch speaks volumes. Don
 
Hi Don,

Here's the link to the home page of the APP: http://appliedprecog.org/

The three principals are Marty Rosenblatt, Chris Georges and Tom Atwater. APP is an LLC registered in the state of Nevada. It's also a democratically oriented remote viewing group where "the viewer is in charge" - that's part of the philosophy. Although there are c.100 people on the distribution list, there may be "only" 20-30 viewers active at any one time in 7 to 8 groups. There is no charge for anyone to join an existing group or start their own group. ARV is the focus.

Like you, I've been doing RV for many years now and have been part of three substantial group efforts - TDS, Phoenix/Aurora and now APP. In the APP people get along very well, the vibes are good and a variety of approaches to ARV are being put to the test - and extensively databased. Within the last year APP has benefited from the input of Joe McMoneagle and Ed May and the software they use is being made ready for further use in APP. Tom Atwater is in charge of the stats, along with Alexis (as I understand it).

I'm very happy to be part of such a productive group effort.

As you may have seen, the APP is holding its second conference this June in Las Vegas. Joe McMoneagle is coming again and Skip Atwater and Marty will be the lead people for the gathering. I'll post again soon about this in another section on TKR, or one can take a look at the APP site for details.

Jon
 

Don

New Member
Jon,
Cool. Thanks for the information about APP. I'm very interested. Although I've been RVing for a long time, I have almost no experience working in a group. Back when I started, I believe Ed Dames' TRV (followed quickly by Paul Smith's and Lyn Buchanan's programs) was the only training program available. Not having the funds to pursue TRV, I read everything I could get my hands on - especially Joe's book "Mind Trek" - and began doing a self-taught, meditative method. With some pointers from Joe, I was able to develop something that works.

Since my wife passed away a few years ago, I've been without a consistent tasker. As I firmly believe that remote viewing is a group effort, with every person's role being just as important as the viewer's role, I've really missed the input of others. In addition, since I've never done a lot of ARVing (maybe a total of a couple hundred sessions over the years), APP sounds very exciting.

I like Marty Rosenblatt. I met him some years ago at an ARV workshop that was held at the Monroe Institute. The workshop was held by Marty and Skip Atwater.

I'm also interested in this judging program, created by Ed May, that you've mentioned several times. The idea of a computer program judging RV transcripts - as opposed to the subjectivity of a human judge - is intriguing. Thanks again! Don
 
Don,

Glad to hear you're interested - if you'd like to learn more about APP or want to try a group, I suggest getting in touch with Marty.

The idea of a computer program judging RV transcripts - as opposed to the subjectivity of a human judge - is intriguing.
To be clear, the computer performs a function something like judging, if not judging as we ordinarily think of it. What happens is: A set of 300 photos was laboriously selected. The set is divided into categories. Each photo was rated as to the change in the Shannon entropy in it - don't ask me how. :) Then a group (of humans) was asked to what extent each of x categories is present in the photo. (The number of categories and what is in each category is best kept from the viewers so I won't mention it here.) To determine the extent, the rater assigned a score from 0.0 to 1.0 for each category for each image. Thus, for a given photo, the rater might think caves (not an actual category) are depicted to maybe a 0.5. The Moon (not an actual category) got a 0. Darkness (not an actual category) got a 0.9. In this way each photo was given a profile.

Now, when putting the software to use:
The computer selects two photos randomly from orthogonal (widely differing) subsets.

The viewer does a session. The analyst rates the session for each category - without seeing either photo. In fact, the computer can choose the photos after the profile for the session is submitted.

Then the computer compares the profile the analyst gave to the session with its profile for each of the two selected photos. If the computer awards a score of at least .4519 (reliability times accuracy) to the session, as a match for one of the two photos, then we have a basis to choose one of the two outcomes that were predetermined by the equivalent of a TRN. However, it is rare to have a score that high.

It took many man-years to devise the software and it sounds complex, but it's pretty easy to use in practice. And it has a very high rate of success, albeit with a high percentage of passes (c. 70+%). Ed May has published several articles about the development of this software.

Some subjectivity remains: it is in the rating of each of the categories by the analyst for a given session.
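From the description above, the matching step amounts to comparing the analyst's category-score profile for the session with the stored profile of each photo. Here is a very loose sketch of that idea - my own guess at the shape of it, with made-up categories and a simple cosine similarity, not Ed May's actual CAS code or scoring formula (in particular, how the 0.4519 reliability-times-accuracy threshold is really applied is an assumption here):

import math

# Loose sketch of the profile-matching idea described above - NOT Ed May's
# actual CAS algorithm. The categories and the similarity measure are made up;
# only the overall flow (profiles, two orthogonal photos, a pass threshold)
# follows the description in this thread.

CATEGORIES = ["water", "structure", "vegetation", "people"]  # hypothetical

def similarity(profile_a, profile_b):
    """Cosine similarity between two category-score profiles (values 0.0-1.0)."""
    dot = sum(profile_a[c] * profile_b[c] for c in CATEGORIES)
    norm = math.sqrt(sum(v * v for v in profile_a.values())) * \
           math.sqrt(sum(v * v for v in profile_b.values()))
    return dot / norm if norm else 0.0

def decide(session_profile, photo_a, photo_b, threshold=0.4519):
    """Pick the photo (and thus its pre-associated outcome) whose profile best
    matches the session profile; pass if neither clears the threshold, and also
    pass if BOTH clear it (which could hint at displacement)."""
    score_a = similarity(session_profile, photo_a)
    score_b = similarity(session_profile, photo_b)
    if score_a >= threshold and score_b >= threshold:
        return "pass (both high - ambiguous)", score_a, score_b
    if max(score_a, score_b) < threshold:
        return "pass (no clear match)", score_a, score_b
    return ("photo_a" if score_a > score_b else "photo_b"), score_a, score_b

# Made-up example: the analyst's ratings for a session vs. two photo profiles.
session = {"water": 0.8, "structure": 0.1, "vegetation": 0.6, "people": 0.0}
photo_a = {"water": 0.9, "structure": 0.0, "vegetation": 0.7, "people": 0.1}
photo_b = {"water": 0.0, "structure": 0.9, "vegetation": 0.1, "people": 0.8}
print(decide(session, photo_a, photo_b))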

Jon
 

Don

New Member
Wow! Thanks, Jon.

Your post answered many of the questions I had about the way people are doing ARV these days. For example, for the life of me, I couldn't figure out how an ARV project can be set up without the tasker and the analyst seeing both potential targets. As you mentioned, with Ed's software, there is still some subjectivity, but it has been minimized to a large degree.

(The number of categories and what is in each category is best kept from the viewers so I won't mention it here.)
Thanks for that. I plan on taking part.

the equivalent of a TRN
Excuse my ignorance, but what is meant by "TRN"?

Does Ed's program seem to have any impact on that cursed, ever-present displacement problem? You mentioned that it has a really good track record, so I would assume that it does.

Each photo was rated as to the change in the Shannon entropy in it - don't ask me how.
I remember, some years back, Joe telling me that not only did the degree of entropy in a target seem to impact how easily that target is remote viewed, but that they ("they" meaning Joe and Ed and possibly others, I suppose) had come up with a way to measure the degree of Shannon entropy extant across a target photo image. That they can do this, and how they do this, still baffles me.

Ed May has published several articles about the development of this software.
I'm going to search for those articles. Well, maybe I shouldn't if it is best for the RVer not to know about the various categories.

Thanks for the explanation, Jon. I really appreciate it. I'm going to contact Marty and will most likely be taking part soon. Don.
 
That's great, Don! I look forward to seeing you on the lists at APP.

Ed May's software is just one method that APP has used. We aren't currently using the software due to some technical issues, but any day now we will be again. There are about 8 groups and several different methods are in use in APP. Marty and I are the main contact people for use of Ed May's software in the APP. Ed May is not formally associated with APP. He is generously allowing us to use it, though. The name of the software is CAS (Computer Assisted Scoring). Please email me if you want to discuss CAS more. Marty has info on all the other groups that are currently running if you'd prefer to be in one of them. (Some of us view for 2 or even 3 groups.)

TRN is the Target Reference Number. We used to call it a Tag in TDS. It's an arbitrary 6 to 8 character alphanumeric identifier used for bookkeeping purposes.
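If it helps to make that concrete, a TRN is nothing more than an arbitrary label - something along these lines (a made-up generator for illustration, not APP's actual bookkeeping code):

import random
import string

def make_trn(length=8):
    """Generate an arbitrary 6-8 character alphanumeric Target Reference Number,
    used purely as a bookkeeping label (illustrative only)."""
    chars = string.ascii_uppercase + string.digits
    return "".join(random.choices(chars, k=length))

print(make_trn())   # e.g. '7KQ2M9XB'
print(make_trn(6))  # e.g. 'A41ZP0'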

Yes, Ed May and Joe McMoneagle say that by using this software feedback loops are reduced, if not eliminated, and hence displacement is too. Except that since no one ever sees the photo that did not "actualize" for an event, one can't be sure. If the computer awards two high scores in matching the scoring profile of the session with each photo, though, then one could perhaps say displacement has occurred. However, it appears that two such high scores seldom occur.

The main watchword of Joe and Ed at the APP Conference last June was - Reduce Feedback Loops!!! They feel that a lot of ARV is done wrong because practitioners create a lot of such loops. Even betting (by the viewer) helps form such a loop, they say. However Joe McMoneagle did not ask us to adhere to this stricture when he ran a few ARV ball game picks at the APP Conference.

Jon
 

Mycroft

Active Member
Jon K said:
The main watchword of Joe and Ed at the APP Conference last June was - Reduce Feedback Loops!!!
Yes, watch what is passed around in your feedback email too. :D

This one was hilarious; it happened to me recently. I did a session with an extremely strong signal line in which I named (oops) and/or described two completely disparate objects along with a third object. I got the feedback and I was, like, totally off - as if I had been on two different planets at the same time. Then it happened: one of the viewers replied all about an object they were working on, and the other replied all announcing an event.

OMG! ;D I was red in the face after that one. Being the people pleaser that I am, I had named or described both terms that had been passed back and forth in the feedback email! I had nailed all three items. But it wasn't the feedback; it was what had been in the notes passed back and forth. I showed them what had happened. A week later I made the same blunder myself.

Segregate your feedback from your chat list! Ha ha!

Live and learn. Like I said, I did it myself in a subsequent week; it is easy to do.

Mycroft
 

Don

New Member
Being the people pleaser that I am, I had named or described both terms that had been passed back and forth in the feedback email! I had nailed all three items.
Yes, this kind of thing happens. Much like my describing (remote viewing of...) things that are on the TV while I'm checking my feedback photo. It seems that anything that is strongly associated with the viewing or with the feedback experiences becomes part of the whole remote viewing experience, leading to displacement. I think a big part of the problem has to do with what we are willing to accept. This can be difficult, because anything that is described in detail shows a strong PSI response; that can be exciting and, at some level, we are "happy" with that.

Please email me if you want to discuss CAS more. Marty has info on all the other groups that are currently running if you'd prefer to be in one of them. (Some of us view for 2 or even 3 groups.)
Thanks Jon. I will - probably next week sometime as work has me swamped at present. I'm looking forward to participating. Don
 

tbone

Active Member
It seems to me that there is a "sweet spot" in how deep to go into a session to avoid displacement. I have found (anecdotally at least) that if I go too deep or try to get too much detail, I start to displace. I just do a short session, between 5 and 10 minutes, with only 3 stages. Instead of the 1 ideogram that Dames teaches, I do 3 or 4, as I have seen some others do. I don't probe them or write my impressions about them; I find that distracting. I just memorize my perceptions and get a general overview. Stage 3 is where I really see examples of displacement if I go too deep. I don't just use archetypes; I draw a general impression of what I perceived from the ideograms along with some Stage 2 descriptors, with maybe an archetype or two thrown in. What I find is that when my quick sketch is complete and I try to go deeper and go back for more detail, I almost invariably draw an element of the wrong target. I don't know if this will be helpful to anyone, but that is my 2 cents.
 

sharp

New Member
I have noticed this as well, tbone. The earlier data for me is much more trustworthy, and later probes that yield vivid but very different aspects of a target are more often than not displaced. It is certainly an art, a feel, and not so much a science. But it sure is fun.

Regards
 