I did a week-long dowsing experiment maybe a year ago.  Much dowsing is done frontloaded, but since I'm from the RV model, I do it double-blind or solo-blind, of course.
I chose 50 binary dowsing targets.  Many of these did not have feedback; some were personal; some were objective with feedback.
I put the question (printed 'em off and cut 'em up) behind an index card in a security envelope and tucked the flap in.  I put them in a big cloth bag (sorta like a backpack) and shook them well.
A week later I shook 'em all up again, and then began.
One by one I would take out an envelope, focus on it, and then use a pendulum to get a response of yes, no, maybe, or 'ask differently/can't be answered'.
I used the method of beginning with the pendulum still and not looking at my hand, trying to relax, and letting it move until I could feel it moving enough to have an idea of a clear swing direction; only then would I look.
I would put the envelope in a pile depending on the answer, and move on to the next one.
When I had gone through them all, I opened each envelope, and on the index card, wrote down the date/time and response, and then put it back in the security envelope.
I shook them all up again.
A week later I did them all again, the same way.  For a couple, I had changed the wording of the question, as they'd come up 'ask differently' and on consideration I decided they really were poorly phrased.
A week later, I did a final third run on all of them, and recorded it.
Then I sat down to look at each and consider the results.
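(Side note: if you'd rather do the bookkeeping on a computer than on index cards, here's a rough sketch of one way to keep yourself blind -- random IDs standing in for envelopes, a 'key' file you don't open until the end, and a response log per run.  The file names and example questions below are placeholders; this isn't what I actually did.)

```python
# Purely hypothetical bookkeeping sketch -- I used paper envelopes and index
# cards, not a script.  The idea: give each question an opaque ID up front,
# log responses against the ID only, and don't open the key file until the end.
import csv
import random
import uuid

questions = [
    "Made-up example question 1?",
    "Made-up example question 2?",
]

# Build the blinded pool: shuffle and assign random IDs.
pool = [{"id": uuid.uuid4().hex[:8], "question": q} for q in questions]
random.shuffle(pool)

# The key (ID -> question text) goes in a file you don't look at until all runs are done.
with open("key.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["id", "question"])
    w.writerows((item["id"], item["question"]) for item in pool)

# Each session, log the response and any notes on 'degree' against the ID alone.
with open("run1.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["id", "response", "notes"])
    for item in pool:
        response = input(f"{item['id']} -> yes / no / maybe / ask differently: ")
        notes = input("degree or other notes: ")
        w.writerow([item["id"], response, notes])
```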
Here are things I noticed about the experience, for whatever it's worth to others who want to hear about this stuff and maybe run their own experiments.
1.  I should have done one envelope at a time, and then opened it, and written down the result.  Not only would the 'feedback' have been far more direct and timely, but there is also the issue of 'degree'.
You would think a pendulum just swings for yes or no.  But in fact, as you get used to it, you find that it "feels sluggish at first, and then gives a 'reluctant' yes," or "jerked so hard it nearly took my eye out"  8) <-- safety glasses, lol!, or "started as yes but switched to no," or things like that.  I could not record the subtlety or degree of response because of doing them all at once.
(Why did I do them all at once?  I didn't want to frontload myself, since the more I'd seen, the more I knew which ones were left.  The solution: a much larger pool, do fewer at once, and whenever I write down the third result on one, put it over in the 'done' pile.)
2.  I should have stuck to questions that all had real, specific feedback.  Since many of the answers varied or had some 'iffy-ness' to them, sticking to clear-feedback questions would have helped me make more sense of the responses.
3.  I should have written the questions better.  I should know better, but... humorously, there were a few I didn't pay close attention to, and even when I got an answer I wasn't sure what it meant, lol, as the phrasing could go either way, included more than one question, or implied something else.
4.  I had several 'calibration' items in the pool.  They varied in response: some were wrong, some were right, some got three different answers across the runs.  In the end, my assessment of the trial is inconclusive, in part due to that.  I mean, if more of those had been overwhelmingly correct, I would have had more faith in the others. ;-)
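If anyone wants to put numbers on this kind of run, here's the sort of tally script I'd use next time -- the question IDs, responses, and known answers below are made up for illustration, not my actual data.

```python
# Hypothetical scoring sketch -- not my actual records, just the shape of them.
from collections import Counter

# One record per question: an id, the three run responses, and the known
# answer for calibration items (None where there was no feedback).
records = [
    {"id": "Q01", "runs": ["yes", "yes", "yes"], "known": "yes"},
    {"id": "Q02", "runs": ["no", "maybe", "yes"], "known": None},
    {"id": "Q03", "runs": ["no", "no", "yes"], "known": "no"},
]

hits = misses = 0
for r in records:
    counts = Counter(r["runs"])
    answer, votes = counts.most_common(1)[0]      # most frequent response (first seen wins a tie)
    consistent = votes == len(r["runs"])          # all three runs agreed
    line = f"{r['id']}: majority={answer} ({votes}/3)"
    if r["known"] is not None:                    # calibration item with feedback
        if answer == r["known"]:
            hits += 1
            line += " -- HIT"
        else:
            misses += 1
            line += " -- MISS"
    print(line, "(consistent)" if consistent else "(mixed)")

print(f"calibration: {hits} hits, {misses} misses")
```

Even something that simple would have told me at a glance how often the three runs agreed and how the calibration items scored, instead of eyeballing a stack of index cards.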
FWIW!
PJ