>>16717996
What? You have to structure your thoughts more clearly; this is incomprehensible until the last two sentences. I did not compare wins against misses. My code, although bad, is quite clear I think.
What I did in the simulation is count how many times a draw of 13 cards is contained in a separate, independent draw of 31 cards, repeating this experiment over and over; then I divided the final count by the number of experiments and converted to a percentage. By the law of large numbers, this converges to the true probability as the number of simulations goes to infinity.
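If it helps, one experiment on its own is just this (a minimal sketch, using the same stdlib random module as the full code below; the two draws come from the same full deck, so they may overlap):
[code]import random as rnd

deck = list(range(52))
a = set(rnd.sample(deck, 13))  # the 13-card draw
b = set(rnd.sample(deck, 31))  # the separate 31-card draw
print(a <= b)  # True iff all 13 cards appear among the 31[/code]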
If you want to convince yourself of the math, read up on https://en.wikipedia.org/wiki/Hypergeometric_distribution or google examples of that distribution in practice.
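The exact value is a one-liner if you trust the hypergeometric argument: a fixed 31-card hand contains C(31,13) of the C(52,13) equally likely 13-card hands, so the probability is C(31,13)/C(52,13). A quick sketch with math.comb:
[code]from math import comb

# P(13-card draw is contained in an independent 31-card draw)
p = comb(31, 13) / comb(52, 13)
print(p * 100)  # exact probability as a percentage, ~0.0325%[/code]
This is the number the simulation below should converge to.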
But there's no need to check the math if you don't understand it; just verify it empirically. Since the true probability is quite small, you "need" a lot of simulations, so here's faster code if you want to test it yourself. If your pc doesn't have 6 processors, change the procs variable to match:
[code]from multiprocessing import Pool
import random as rnd

init_seed = rnd.randint(1, 100)
total_iters = int(25e6)

def worker(seed_and_iters):
    seed, its = seed_and_iters
    rng = rnd.Random(seed)  # per-process RNG with its own seed
    deck = list(range(52))
    count = 0
    for _ in range(its):
        a = set(rng.sample(deck, 13))
        b = set(rng.sample(deck, 31))
        if a <= b:  # all 13 cards appear among the 31
            count += 1
    return count

if __name__ == "__main__":
    procs = 6  # set this to your CPU count
    its_per = total_iters // procs
    args = [(init_seed + i, its_per) for i in range(procs)]
    with Pool(procs) as p:
        counts = p.map(worker, args)
    total = sum(counts)
    # divide by the iterations actually run (its_per * procs),
    # since total_iters may not divide evenly by procs
    print(total / (its_per * procs) * 100)[/code]