Increasing win rate by using trials with added resistance

Hey it’s been a while, I’ve tried to move on from MMI and this type of stuff in general but it seems I keep coming back in some way.

After practicing quite a bit using the METrainer, let’s say I did reach quite the level.

I’m also able to quickly switch between target high and low.

There are a few things that still need work. One is that playing without “Hide Miss Ball” (so the smaller miss ball is visible) drags my win rate back down to around 50%. It takes an enormous amount of effort to get it higher compared to simply turning the miss ball off. Then again, I simply haven’t put in the time to fix this issue.

To add, when playing with “Hide Miss Ball” I can also keep the target on high and pretend it’s target low, meaning that the larger ball is a miss and the empty space is a win.

But for higher hit rates I feel that there must be a next level or difficulty increase possible.

Which gave me the idea of weighted trials or trials with added resistance.

The idea here is that we try to increase the difficulty of trials, which mirrors that of a progressive overload system.

Take the bench press or pull-ups as physical examples. If you can do one set of 3-4 pull-ups, doing more pull-ups each day eventually lets you do a set of 5-10 in succession. It’s the same for the bench press: you add weight every week, which lets you lift more over time. Say you press 30 kg today; by adding 2.5-5 kg every week you should eventually work up to, say, 50 kg. By adding weight you can progress linearly with a method.

But in the case of pull-ups, increasing resistance is a bit harder. At a certain point adding reps doesn’t really increase strength, and progressing to higher rep counts gets harder, so eventually it’s better to start adding “weight”, such as a backpack. If you can do 10 pull-ups in succession, you start doing weighted pull-ups; and once you can do 10 weighted pull-ups, 10 normal pull-ups feel a lot easier.

In the same way I was thinking that by adding resistance in trials, it should make it easier to “win” on base levels.

To do this we simply apply a probability threshold to each trial, using p = 1 – (cumulative normal distribution function at z) from another thread.

Level 1 would be: p < 0.50.
Level 5 would be: p < 0.45.
Level 10 would be: p < 0.40.
Level 25 would be: p < 0.25.
Level 40 would be: p < 0.10.
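As a sketch of the formula above, with Φ approximated via math.erf. I read the levels as requiring the trial’s p to fall below the level’s threshold, since a smaller threshold makes the trial harder; the function names are mine:

```python
import math

def p_from_z(z: float) -> float:
    """One-tailed p value: p = 1 - Phi(z), with Phi the standard normal CDF."""
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def passes_level(z: float, p_threshold: float) -> bool:
    """A trial passes the level only if its p value beats the threshold."""
    return p_from_z(z) < p_threshold

# e.g. a trial z of 2.0 gives p ~ 0.0228, which clears every level listed above,
# while z = 0 (p = 0.5) clears none of them.
```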

When focusing on more ones than zeros, we simply add a threshold. If a trial doesn’t pass that threshold, it counts as a loss.

In practical terms it would mean adding a variable containing the score. The score can’t go below 0. On a win we add 1 to the score; on a loss we subtract 1. When the score reaches 10, we progress to the next level and reset the score to 0.
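A minimal sketch of that bookkeeping (class and constant names are mine, not anything from the METrainer):

```python
class ResistanceTracker:
    """Score/level bookkeeping as described: the score floors at 0,
    +1 per win, -1 per loss, level up (and reset score) at 10."""

    PROMOTE_AT = 10

    def __init__(self):
        self.level = 1
        self.score = 0

    def record(self, win: bool) -> None:
        if win:
            self.score += 1
        else:
            self.score = max(0, self.score - 1)  # score can't go below 0
        if self.score >= self.PROMOTE_AT:
            self.level += 1
            self.score = 0
```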

I guess I’d like some guidance on where to go next, and would like to hear your thoughts Scott. Have you ever considered anything like this?

Editing the METrainer seems quite complex, so I was thinking of building another application entirely.

Hi David,
Thanks for your ideas and your detailed post.

The METrainer has a variety of feedback modalities, allowing the user to mix and match and select what works best for them. I should emphasize that the user will achieve the highest hit rates when receiving feedback that connects.

Unfortunately, with ever-changing operating systems and browsers, the METrainer no longer works on most computers, but that’s another issue.

Sounds like you have practiced quite a bit with good results, and I would love to see some of your specific results.

With respect to your “resistance” idea: your suggestion shows you have realized that achieving and increasing significant results requires much the same type of motivation and practice as physical sports or activities. I have found this to be correct. On the other hand, the METrainer allows testing up to extremely high levels of statistical significance - levels no one I know of has ever achieved. The best I ever got was a p value of just above 0.000001 (ME Score of almost 20) in a 10 minute session. To increase difficulty, try Continuous mode versus user-initiated trials. I estimate it’s about 5 times harder to reach the same levels as with initiated trials. Since I developed the Trainer many years ago, a lot of progress has been made, both in hardware and processing (see Advanced Processing Methods).

The METrainer also includes the PCQNG (a software-enabled TRNG) as an Entropy Source. This type of local entropy source would be useful for many applications and games, and several forum members have worked on some approaches, as well as some advanced processing methods in practice applications. All-in-all, a new generation of trainer would be desirable.

Heya @David. Nice to see ya back here after a few years. (For those of you who don’t know David: he’s OG and was the one who helped set up the Discourse software on this forum.)

I like the idea of progressive weighted training - the gym analogy works well. It also fits the concept of thinking of our “psi muscle”: some people are buff, most of us are just weak :slight_smile:

@fluidfcs1 went over lots of Scott’s papers and made a Python version that works like the ME Trainer. You might find that an easier one to work with. Plus, he was looking for feedback and checks on his implementation of the papers’ algorithms.

Sharing MMI program demo/code - (Python ME Trainer) is the thread

Hey Scott,

With continuous mode are you talking about File → Auto Test With Log?

It says ver 1.0 in the window, is this the correct and latest version of the METrainer?

When using the program, most of it has been done using the PCQNG and PRNG, despite having two devices lying here: the MED100k and the MED100kx3.

The idea was that getting a higher hit rate on PCQNG and especially PRNG makes it easier to get it on the MED devices.

For results, from the logs: if the first row is a 1, that indicates a win or hit according to the help file. I’ve changed all 0 values to -1 and created a few diagrams from some of the logs.
Not sure if it shows anything too crazy; these are long sessions and I’m not focused the entire duration. Only at certain times am I focused on winning, otherwise it’s just a bit of clicking, plus switching between modes and RNGs. For some of the graphs it might be the case (I’m not sure anymore, as I took logs from different dates) that I started focusing on the negative result (the empty space) without changing the settings. The line is the cumulative sum of the 1s and -1s; the horizontal axis is the number of trials.

After reaching, say, a win rate of 0.6 in around 100 trials, it also inevitably goes back down to 0.5. That’s when I click reset, or I reset at a certain point anyway.

I don’t want to overestimate my abilities, so I describe the results a bit conservatively. But nonetheless, I’m really happy with these results already.

For how the results feel: when opening the program, after turning off the settings I don’t like, I can just spam-click (5+ clicks per second) and it will move in the direction I want. At around 1-4 clicks a second (my normal play speed), the score goes to around a 0.50-0.6+ hit rate over 50-100+ trials. Slower clicking (1 per second) doesn’t always necessarily increase my win rate. I start doing this when the ball slows down and starts moving in the opposite direction despite me wanting it to go further; there’s what feels like a tug of war as my effort increases and the clicks slow down. If we’re talking about scores, I’d say I can get a consistent 6-10 score within minutes of opening the program.

Using the METrainer I’ve tried all kinds of methods: (extra) physical tension, visualizing green or lush fields, or a rolling ball. (I changed the large ball to the green smiley ball from the image folder, because I associate the color green with a win.)

To be honest though, I don’t like the visualization aspect and am too lazy to put much work into it.

Counting wins has been my most used technique. I say the wins in my mind after they occur, sometimes before they occur (to try to force an influence, but mostly after): “One, two, three” (followed by a loss here), “one” (loss), “one”, “one, two” (loss), “one” (loss), “one, two, three, four, five”.

But it feels like this is all falling away; now I just click with a bit of physical effort here and there, I think.

Despite all these efforts, not every round makes it to a 0.6 hit rate in 100 trials. A round is what happens between clicks of “reset stats”.

For the higher levels: I haven’t focused on reaching levels of 15+ and instead tried focusing on consistency. Of course it happens every now and then.

What would you like to see in a new trainer? Would it still consist of a rolling ball? Do you have any additional ideas?

Very interesting! Seems like there has been some real progress; who knows, maybe MMI will still become quite the thing in the next few years :smiley:

Will definitely take a look!

Hi David,

It’s been a while since I used the METrainer, so I had forgotten that continuous mode just requires holding down a key, such as the space bar. The program will take data continuously at its max rate (about 5 trials per second) and stop when the key is released. Yes, theoretically you can make your settings and start the continuous testing program; it will run until stopped, which may be a little awkward for starting and stopping.

I believe ver 1.0 is the latest version.

My experience is consistent with yours: The PRNG source is the hardest, PCQNG is easier and the external hardware (MED100Kxx) is the easiest. MED100kx3 is the better of the two (3 x 128MHz internal generation rate).

Your plots show, on the y axis, the number of hits minus the number of misses (n1 - n0); and on the x axis, the number of trials (n). The z-score is calculated as (n1 - n0)/Sqrt[n]. At a glance, it seems all 6 of these plots show statistically significant results, i.e., p < 0.05, with most results having p < 0.02. Plot 6 data (n1 - n0) is about 255 at n = 12000. The z-score is then 2.33, and the one-tailed p value (from the CDF of the normal distribution) is p < 0.01 (significant at the 1% level). Note, a one-tailed test is used because only hits are considered valid, while negative values of (n1 - n0) are not.
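For anyone who wants to reproduce the arithmetic above, a small sketch using the standard normal CDF (approximated here via math.erf):

```python
import math

def z_score(n1: int, n0: int) -> float:
    """z = (n1 - n0) / sqrt(n), the normal approximation used above."""
    n = n1 + n0
    return (n1 - n0) / math.sqrt(n)

def one_tailed_p(z: float) -> float:
    """p = 1 - Phi(z): chance probability of a result at least this extreme."""
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Plot 6: n1 - n0 ~ 255 over n = 12000 trials
z_plot6 = 255 / math.sqrt(12000)   # ~2.33
p_plot6 = one_tailed_p(z_plot6)    # ~0.0099, i.e. p < 0.01
```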

A win rate (hit rate) of 0.6 at 100 trials gives a z-score of (60 - 40)/Sqrt[100] = 2.0, with a p value of p < 0.023 - considered statistically significant.

Combining all the data from the 6 plots would give a very significant statistic, but this would not be scientifically sound for a number of reasons: 1) most importantly, these collections of data were selected from a larger pool of results, 2) the conditions under which each set of data was collected were likely different, and 3) the data collections are all of different lengths. Even so, the results you present are impressive.

An ME score of 6-10 indicates p values in the range 0.001 < p < 0.016: all statistically significant.
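As an aside, the numbers quoted in this thread (ME 6-10 corresponding to p of roughly 0.016-0.001, and earlier ME of almost 20 corresponding to p just above 0.000001) are consistent with the ME Score behaving like -log2(p). This is an inference from those figures, not the METrainer’s documented formula:

```python
import math

def me_score(p: float) -> float:
    """Apparent relation inferred from the quoted numbers: ME = -log2(p).
    (An assumption for illustration, not the documented METrainer formula.)"""
    return -math.log2(p)

# me_score(0.016) ~ 6.0, me_score(0.001) ~ 10.0, me_score(0.000001) ~ 19.9
```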

I wouldn’t place arbitrary limits on how a more advanced trainer would look - doesn’t have to have a rolling ball. However, many details in the METrainer have been developed and tested over many years, so they should be given weight in any new design.

It’s interesting because at first I got an extremely negative win rate on the MED100kx3, and I thought the device must have had some kind of defect when it was sent to me - some bias toward outputting 0s.

About the p value: does that hold per round? I consider a round to be whatever happens between clicks of “reset”. In these graphs, for example, there are multiple rounds that are quite successful, while others are not.

For example, if 3 rounds are done of 250 trials each:
Round 1 reaches p ≈ 0.01
Round 2 reaches p ≈ 0.84
Round 3 reaches p ≈ 0.25

Was “round 1” a 1-in-100 round? And if a p of 0.00001 is reached, is that 1 in 100,000 rounds, despite the results of the second round? At least that’s what I always thought.

Also, from the help file I got these things: the average ME level produced by chance is 3.8 for any 100-trial session and 4.7 for a 200-trial session. And: long-term testing will produce a p(HR) typically between .01 and .99; since this is a statistical measure, the p(HR) will exceed these limits 2% of the time.

Does that mean the end result from the calculation for image 6 mirrors long-term testing, or could be obtained by long-term testing? Or at least 1 time in 100 long-term tests of 12,000 trials?

Is there an ME level or result that can’t be obtained from testing? Or is any level bound to happen if testing 24/7?

I would say first: everyone suspects the generator when something looks unexpected. Every MED100Kxx generator was tested for weeks, and then again immediately before shipping. You can test the statistical properties using the QNGMeter, which is a good idea in general, but out of hundreds I never saw one fail after burn-in.

The p value we usually use is the probability the observed result could have occurred in the absence of any influence. This is known as the null-hypothesis test. A p value of 0.01 means the result will occur by chance, on average, 1 in 100 Rounds (also called series or blocks of data). The more improbable the p value, the more plausible an alternate hypothesis becomes. Here, the alternate hypothesis is that an influence is present. Note, the null-hypothesis test does not prove the presence of an alternate hypothesis, only how unlikely it is that the observed result occurred completely by chance.

As I noted previously, it would not be appropriate to combine these results without observing strict testing protocols. When you reset, all statistics are returned to their initial values, which is a natural start of a new Round. A p of 0.84 means over 50% of the trials were misses and p of 0.25 means more than 50% were hits, but neither Round was statistically significant.

A p value of 0.01 means if you repeat the test a large number of times (many more than 100 times) without observing or attempting to influence the result, you will see p < 0.01 about 1 in every 100 rounds. The length or number of trials is not important for this test as long as it’s the same every time.

When I refer to “long-term testing,” I usually mean for days or longer. Theoretically, any p value or ME level can occur. The z-score for a sequence that is all 1s (or all 0s) is Sqrt[n], so 100 1s in a sequence of 100 trials produces a z of 10, with astronomically low probability. However, such sequences do not occur by chance, or at least not in the lifetime of the universe. A z-score greater than 4 rarely comes up (about 1 in 32,000 rounds).
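Both figures in that paragraph can be checked numerically:

```python
import math

def one_tailed_p(z: float) -> float:
    """p = 1 - Phi(z), with Phi the standard normal CDF (via math.erf)."""
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# All 100 trials hits: z = sqrt(100) = 10 (astronomically improbable)
z_all_ones = math.sqrt(100)

# z > 4 happens by chance about once in 1/p rounds
rounds_per_z4 = 1.0 / one_tailed_p(4.0)   # roughly 31,600 rounds
```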

Have you found that during a session, after taking a quick break and returning, you can simply click a few times and reach a high combo of wins in succession? At first I thought this was simply luck, but it has happened quite a few times.

About the p value for individual trials: is a lower p easier to achieve with a larger byte size, or not? I’m mostly accustomed to a trial time of 20 ms and a byte size of 200 in my own programs.

I’m currently trying to improve in continuous mode instead of clicking. Here are some graphs for those interested.

Some of them are MED100kx3, most are PCQNG and one is PRNG.

These graphs are not numbered in chronological order by the way.

Also in these graphs, I tend to end a session on a good note, meaning after a good round.

I’m really interested in seeing how far this can be taken.

Enjoy.

There are patterns that can occur for individuals because we all have different approaches for achieving results. Different operators use more or less “physical” energy (tensing muscles, furrowed brow, etc.), but that usually only causes tiredness or headache, not best results. You can pause briefly for a break during initiated trials. Focus on the desired outcome is always required, but not too much. I might call this a relaxed or easy focus.

Yes, generally speaking using more bits will increase effect size (ES) proportional to the square root of the ratio of increased number of bits/original number of bits. Using 4 x the number of bits should increase ES by about a factor of 2. There are a number of considerations: Each of the bits used should be of the same quality (same generation method), the trial duration shouldn’t be too long or too short (I suggest 200-250ms), and how the bits are processed can be very important as well.
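That square-root scaling rule can be sketched as follows (the numeric values in the comment are placeholders, not measured effect sizes):

```python
import math

def scaled_effect_size(es_old: float, bits_old: int, bits_new: int) -> float:
    """ES scales with the square root of the bit-count ratio, per the rule above."""
    return es_old * math.sqrt(bits_new / bits_old)

# 4x the bits roughly doubles the effect size, e.g.
# scaled_effect_size(0.01, 1600, 6400) -> 0.02
```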

When you’re practicing or just experimenting, you can pretty much do as you please. This is more fun and you still learn and improve your skill level. For scientific data, the testing protocol must be carefully designed and followed exactly. However, we are working to far surpass the level of trying to convince the scientific community that mind or mental effects actually exist.

Image 10 shows a nice series, and the terminal (last-point) z-score is about -3.1, with an associated p value of p < 0.001, which would be considered highly significant if it were observed in an accepted field.

I’ve been practicing with a lower byte size (400) and a lower trial time (40 ms). The thought process was, as you said, that it should be more difficult to get results, but results in the METrainer took 5 times as long to produce; sometimes a session would be 20 or 30 minutes. With the lower trial size I spend a lot less time, so sessions are shorter.

There seems to be some kind of balancing force that activates from time to time: even if a high score is reached, it simply returns to 0. Even switching to target low doesn’t help. Is this a real thing? Edit: then again, thinking about it, this might be where the most improvement lies, as it feels the most uncomfortable.

Enjoy the graphs.

The method used to produce them is from this post:
The MED100kx3 generates 2000-2200 bytes per 190-200 ms; what you said implies I lose about 10% somewhere, but I’m not sure where - it could be the Python library. I have a timer set to the trial time (40 ms in this case). Bytes are generated in sets, in this case 256 bytes per call; after each call the code checks whether the time is reached, and if not, it generates again. We keep calling the device until 40 ms is exceeded. In practice it calls this generate-bytes function 2 times in 40 ms; if the first call takes 21 ms, the total time will exceed 40 ms and come out around 42 ms. Then we have 512 bytes; since we only use 400, the last 112 bytes are dropped. Then we count all the ones and zeros using a byte lookup table, except in the last byte, from which we only use the LSB.
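A minimal sketch of that loop, assuming a hypothetical `read_bytes(n)` callable in place of the actual MED100kx3 driver call (names and defaults are illustrative):

```python
import time

# Popcount per byte value (the "byte lookup table" mentioned above)
POPCOUNT = [bin(b).count("1") for b in range(256)]

def run_trial(read_bytes, trial_ms=40, chunk=256, use_bytes=400):
    """Sketch of the described trial loop. `read_bytes(n)` is a hypothetical
    stand-in for the MED100kx3 driver call; it must return n bytes."""
    deadline = time.monotonic() + trial_ms / 1000.0
    chunks = [read_bytes(chunk)]            # generate first, then check the clock
    while time.monotonic() < deadline:      # keep calling until the trial time is up
        chunks.append(read_bytes(chunk))    # the last call may overshoot the deadline
    buf = b"".join(chunks)[:use_bytes]      # keep only the bytes actually used
    ones = sum(POPCOUNT[b] for b in buf[:-1])   # count 1s in all but the last byte
    ones += buf[-1] & 1                         # from the last byte, only the LSB
    total_bits = (len(buf) - 1) * 8 + 1
    return ones, total_bits - ones              # (hits-bits, miss-bits)
```

With 400 bytes this counts 399 × 8 + 1 = 3193 bits, an odd number, so a trial can never tie.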

I also got a bit interested in subtrials. For subtrials the trial time is halved, so if 40 ms is set it does 2 trials of 20 ms. The results will be [0,0], [0,1], [1,0], or [1,1].

Suppose a [1,1] is a win and everything else is a loss; that makes it 1 in 4, or 25%. In these graphs we do +1 on a win and -1 on a loss. Should a +4 (i.e., adding 1 four times) be used instead of +1, with -1 on a loss? (After building it, it seems to be +3; that gives around the same feeling as normal trials, see the graphs below as an example.)
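One way to see why +3 “gives around the same feeling”: with a chance win probability of 1/4, scoring +3 per win and -1 per loss makes the expected drift of the score zero under pure chance, exactly like ordinary ±1 trials, whereas +4 would drift upward by 0.25 per pair even with no influence. A quick check:

```python
def expected_drift(p_win: float, win_step: float, loss_step: float) -> float:
    """Expected change in the running score per (sub)trial under chance."""
    return p_win * win_step - (1 - p_win) * loss_step

# Ordinary trials: p = 0.5,  +1/-1 -> drift 0
# Subtrial pairs:  p = 0.25, +3/-1 -> drift 0 (matches the "same feeling")
# Subtrial pairs:  p = 0.25, +4/-1 -> drift +0.25 (upward bias by chance alone)
```

Note the step size still changes the variance: a +3/-1 walk is jumpier than a ±1 walk even though both are drift-free.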