An Unnecessarily Complicated Method of Computing Pi, and a Way of Making Your Games Look a Lot More Beautiful: A Shallow Dip into the Monte Carlo Method

At its heart, computation is merely a new representation of mathematics. While computers help us solve problems we couldn't otherwise attempt, they also impose their own constraints on what a computer can and can't do. The simplest method is rarely the best method, and this can make computational solutions appear unnecessarily complicated. But that is rarely the case: some of the workarounds that computer scientists use are truly beautiful from a logical perspective. The Monte Carlo method is one such thing.

At its core, the method is to use randomness to solve problems that are deterministic in nature. That sounds counterintuitive, but it makes sense if you spare it some thought.
Integrating functions is extremely difficult, for various reasons. In many cases it's impossible to integrate a function by hand, and doing so symbolically is too complex for a computer to manage in a reasonable amount of time. In such cases, instead of integrating and calculating the value exactly, you take a discrete summation of the function's values between the two limits. The smaller the difference between successive discrete values, and the more of them you take, the better your output will be. The Monte Carlo twist is to use random values in place of evenly spaced ones, and, as we'll see, averaging over many random runs can bring us even closer to the true answer.
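To make both ideas concrete, here is a minimal sketch (my own illustration, not the notebook code) of a discrete sum over evenly spaced points next to a Monte Carlo sum over random points, applied to the same integral:

```python
import math
import random

def discrete_sum(f, a, b, n):
    """Approximate the integral of f over [a, b] with n evenly spaced samples."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

def monte_carlo(f, a, b, n):
    """Approximate the same integral by averaging f at n uniformly random points."""
    return sum(f(random.uniform(a, b)) for _ in range(n)) * (b - a) / n

# Example: integrate x^2 from 0 to 1 (exact value: 1/3)
print(discrete_sum(lambda x: x**2, 0, 1, 1000))
print(monte_carlo(lambda x: x**2, 0, 1, 1000))
```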

Let's try to compute the value of pi using this method.

If you take a square and inscribe the largest possible circle in it, the ratio of the circle's area to the square's area will be π/4: for a square of side 2r, the circle has area πr² while the square has area 4r².
If we consider each point to be a tiny part of the area, then the ratio of the two areas is approximated by the ratio of the number of points that land inside the circle to the total number of points in the square. You could get these random points by many methods. You could throw darts at a paper and count how many land where. You could drop sand on a paper and count which grains fall where. Or you could just write a program in Python to randomly generate values and plot them.
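Here is roughly what that looks like in code. This is a minimal sketch of my own rather than the exact notebook program; it samples the unit square and tests against a quarter circle, which gives the same π/4 ratio:

```python
import random

def estimate_pi(n):
    """Estimate pi by sampling n random points in the unit square
    and counting how many fall inside the inscribed quarter circle."""
    inside = 0
    for _ in range(n):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:   # point lands inside the circle
            inside += 1
    # (points in circle) / (total points) ~ pi/4, so multiply by 4
    return 4 * inside / n

print(estimate_pi(1_000_000))
```

The typical error of this estimate shrinks roughly like 1/√n, which is why it takes hundreds of millions of points to pin down just a few decimal places of pi.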

I've written a Python program that you can access here and play around with yourself. The first program plots the value of pi obtained against the number of random values used. The second program calculates the value of pi and prints it out for you.


As you can see, the estimate approaches the value of pi as you bring more random values into the computation. Using 900 million randomly generated values, the value of pi I obtained was 3.1415671537619407. Obviously it's not exact, but it's quite close to the actual value, and much better than the original 4.02 we got with ten values.

Clearly, this method works, and the more values we take, the better our estimate gets. This is arguably one of the best examples for demonstrating the method; it has made its way from YouTube videos to the Wikipedia page, and it's often one of the first examples you meet when learning the concept in a course.

You can run it with a higher or lower number of randomly generated values and compare the results, if you so choose, over here. Increasing the number of iterations shows the estimate getting closer and closer over time, while increasing the number of random values per estimate narrows the range over which the estimates scatter.

You might be wondering: how is taking 100 randomly generated values better than plugging in 100 sequential, evenly spaced values between the limits of the function? If your computer could only evaluate the function 100 times, wouldn't it be better to use values that are evenly distributed rather than random ones?

Well, honestly, the difference is minimal on an individual scale. To see it, I wrote a program to calculate the value of the integral of y = sin(x) between 0 and π, whose exact value is 2.
Run once, neither approach reaches that expected value: the sequential sum consistently undershoots, while the random sum can overshoot or undershoot. Even after running it ten thousand times, the advantage is still hard to see; the random estimates of the integral are all over the place.
The advantage becomes obvious only when you decide to take the average of all the runs.
On each iteration, the sequentially generated values return the same, slightly-off value (red line). The random values are all over the place (blue lines), but their running average approaches the actual value of the integral, as depicted by the yellow line.
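For reference, here is a minimal sketch of that experiment (my own simplified version; the notebook also produces the plots):

```python
import math
import random

A, B, N = 0.0, math.pi, 100  # integrate sin(x) on [0, pi] with 100 samples per run

def sequential_estimate():
    """Sum over the same 100 evenly spaced points every run."""
    dx = (B - A) / N
    return sum(math.sin(A + i * dx) for i in range(N)) * dx

def random_estimate():
    """Sum over 100 uniformly random points; varies from run to run."""
    return sum(math.sin(random.uniform(A, B)) for _ in range(N)) * (B - A) / N

runs = [random_estimate() for _ in range(10_000)]
print("sequential (same every run):", sequential_estimate())
print("one random run:", runs[0])
print("average of 10,000 random runs:", sum(runs) / len(runs))
```

Averaging independent random runs shrinks the scatter by a factor of the square root of the number of runs, which is why the yellow line settles down onto the true value.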

All that seems wonderful, but what has it got to do with making your games look more beautiful?

Ray tracing is nothing new. The rendering equation describes the radiance (light) leaving a point on a surface and how it depends on the light falling on that point. Various computer graphics techniques have attempted to solve it, one of which is ray tracing, which uses the Monte Carlo method to render images: individual random rays are generated and their paths through the scene are calculated, and combining many such rays gives something close to a clean image. The more rays, the less noisy the rendered image. This is done all the time in anything involving computer graphics, from the newest Pixar film to the latest Red Dead Redemption game.
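For the curious, the rendering equation and the Monte Carlo estimator a path tracer uses for it look roughly like this (my notation: x is the surface point, the ω are directions, f_r is the surface's reflectance function, n the surface normal, and p the probability density the random directions ω_k are drawn from):

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
\approx L_e(x, \omega_o)
  + \frac{1}{N} \sum_{k=1}^{N} \frac{f_r(x, \omega_k, \omega_o)\, L_i(x, \omega_k)\, (\omega_k \cdot n)}{p(\omega_k)}
```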

NVIDIA made headlines a while back with its GeForce RTX lineup of graphics cards, which promised to implement real-time ray tracing while still delivering high frame rates in your games.

But given that we have less than 1/60 of a second to render each frame in a game, we'll be getting a very noisy image, since there is no time for the rendering to complete. So how does the graphics card manage to produce beautiful-looking shots in your game in such a short time?
This is where their other headline feature comes in: DLSS, or Deep Learning Super Sampling. A neural network is trained on NVIDIA's supercomputers using millions of pre-rendered images, with and without noise, and at a higher resolution. This neural network is then delivered to your graphics card via driver updates. The low-resolution, noisy image produced by ray tracing is fed to the network, which outputs an image that's less noisy and at a higher resolution than what was rendered. This is much more efficient on the GPU than a traditional denoising algorithm, and by dedicating separate cores to these two processes, everything else in the scene, such as all the polygons, can render without them taking up that processing power.

That's just the application in computer graphics; there are many more applications of the method. In physics, we constantly see integrals that are extremely hard to evaluate analytically but can be solved within minutes by a Python program like this one.
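As an illustration (the integrand here is my own example, not one drawn from a particular physics problem), a Monte Carlo estimate of a three-dimensional integral with no elementary antiderivative takes only a few lines, and unlike a grid method its cost doesn't blow up as the number of dimensions grows:

```python
import math
import random

def mc_integrate(f, bounds, n=1_000_000):
    """Monte Carlo estimate of the integral of f over the box given by bounds."""
    volume = math.prod(hi - lo for lo, hi in bounds)
    total = 0.0
    for _ in range(n):
        point = [random.uniform(lo, hi) for lo, hi in bounds]
        total += f(point)
    return volume * total / n

# Integrate exp(-(x^2 + y^2 + z^2)) over the unit cube: the integrand has no
# elementary antiderivative, yet the estimate converges in seconds.
f = lambda p: math.exp(-(p[0]**2 + p[1]**2 + p[2]**2))
print(mc_integrate(f, [(0, 1), (0, 1), (0, 1)]))
```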

Similarly, this method is often used in biology to run simulations that statistically track how genes have interacted and changed within and between species, in order to trace the progress of evolution over time.

Making models in climate change and atmospheric science often leaves you with complex functions that can only be solved this way. Fluid dynamics, signal processing, and quality testing are all complex problems that engineers face every day, and all are heavily simplified by the Monte Carlo method.

In the American state of Wisconsin, a program was proposed to help women petitioners succeed in their applications for restraining orders, providing them greater advocacy and hopefully reducing the risk of rape and assault. When doing a cost-benefit analysis, it was found that far too many variables could not be determined: there were so many sub-cases and individual scenarios to check that the problem was too complex to make sense of directly. So a Monte Carlo approach was taken to estimate how effective the proposed plan would be, and the resulting analysis showed overwhelming net benefits of bringing the program into effect.

Similarly, risk-benefit analyses of financial problems, investment returns, financial derivatives, simulated project schedules, and many other applications appear throughout finance and business.

The idea of using randomness to solve deterministic problems seems absurd, counterintuitive, and pointless. And yet it's one of the most beautiful concepts in mathematics, perhaps the best example of order appearing in the midst of chaos. Mathematics and randomness are everywhere around us, and yet they go unnoticed almost everywhere. Understanding exactly how they're applied in the things around us gives us a much better appreciation of them, an attitude we are slowly losing in these modern times.


Footnote: if you want to check out the code I used to generate those graphs, it's in a Google Colab notebook that I made, which you can access at this link.
There you can run the programs yourself, and I have left suggestions on what you can change to see which factors influence the output.
I have also left links to YouTube videos where the method that I adapted into the code is explained.
