
Ari Rapkin Blenkhorn, PhD student in Computer Science, University of Maryland, Baltimore County. ari.blenkhorn@umbc.edu 
Office: ITE 352/365 (VANGOGH Lab) http://www.umbc.edu/~ablenk1 https://www.linkedin.com/in/ariblenkhorn CV (April 2018) 
My research focuses on atmospheric optical phenomena: primarily glories, but also coronas and rainbows. 
I have developed a highly parallel GPGPU implementation of the Mie scattering equations which accelerates calculation of per-wavelength light scattering. The Mie calculations for each scattering angle and wavelength of light are independent of those for any other, so they can all be performed simultaneously; my implementation dispatches large groups of these calculations to the GPU to process in parallel. I use a two-dimensional Sobol sequence to sample (wavelength, scattering angle) space. The 2D Sobol technique ensures that the samples are well distributed, without large gaps or clumps, thereby reducing the number of scattering calculations needed to achieve visually acceptable results. The Sobol sampling calculations are also performed in parallel and use a recently developed technique which precomputes partial results. Overall, this work renders atmospheric glories much faster than previous serial CPU techniques while maintaining high visual fidelity, as measured by both physical and perceptual image error metrics. Additionally, it yields equivalent-quality results with far fewer Mie calculations. The results obtained for glories apply fully or in part to related atmospheric phenomena.
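To illustrate the sampling step, here is a minimal serial sketch of a 2D Sobol sequence mapped onto (wavelength, scattering angle) space. The actual implementation runs these calculations in parallel on the GPU with precomputed partial results; the wavelength and angle ranges below (visible light 380-780 nm, near-backscatter angles 176-180 degrees) are illustrative assumptions, not the values used in the thesis work.

```python
B = 30  # bits of precision per coordinate

def _direction_numbers():
    """Direction numbers for the first two Sobol dimensions."""
    # Dimension 1: m_j = 1 for all j.
    v1 = [1 << (B - j) for j in range(1, B + 1)]
    # Dimension 2: primitive polynomial x + 1 gives m_j = (2*m_{j-1}) ^ m_{j-1}.
    m, ms = 1, []
    for _ in range(B):
        ms.append(m)
        m = (2 * m) ^ m
    v2 = [ms[j - 1] << (B - j) for j in range(1, B + 1)]
    return v1, v2

def sobol_2d(n):
    """First n points of the 2D Sobol sequence, in Gray-code order."""
    v1, v2 = _direction_numbers()
    x = y = 0
    pts = []
    for i in range(n):
        pts.append((x / 2**B, y / 2**B))
        c = 1
        while i & (1 << (c - 1)):  # position of rightmost zero bit of i
            c += 1
        x ^= v1[c - 1]
        y ^= v2[c - 1]
    return pts

def sample_wavelength_angle(n, lam=(380.0, 780.0), theta=(176.0, 180.0)):
    """Map unit-square Sobol points to (wavelength nm, scattering angle deg)."""
    return [(lam[0] + u * (lam[1] - lam[0]),
             theta[0] + v * (theta[1] - theta[0]))
            for u, v in sobol_2d(n)]
```

Because successive Sobol points fill the unit square evenly, each (wavelength, angle) pair lands far from previously computed Mie evaluations, which is what reduces the sample count needed for visually acceptable results.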
My goal is to produce perceptually accurate images of atmospheric phenomena at real-time rates for use in games, VR, and other interactive applications.
Poster presented at SIGGRAPH 2015. 
Collaboration between UMBC computer science researchers and Howard Hughes Medical Institute neuroscientists.
We have created a suite of automated tools to calibrate and configure a projection virtual reality system. Test subjects (rats) explore an interactive computer-graphics environment presented on a large curved screen using multiple projectors. The locations and characteristics of the projectors can vary, and the shape of the screen may be complex. We reconstruct the 3D geometry of the screen and the location of each projector using shape-from-motion and structured-light multi-camera computer vision techniques. We determine which projected pixel corresponds to a given view direction for the rat and store this information in a warp map for each projector. The projector uses that view direction to look up pixel colors in an animated cubemap. The result is a pre-distorted output image which appears undistorted from the rat's viewpoint when displayed on the screen. 
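A minimal sketch of the warp-map idea: for each projector pixel, the calibrated geometry gives a 3D screen point, and the direction from the viewer (rat) to that point indexes into a cubemap. The function names, the `pixel_to_screen_point` callback, and the flat test geometry here are hypothetical stand-ins for the calibration outputs; the actual system stores one warp map per projector.

```python
def dir_to_cubemap(d):
    """Map a 3D view direction to (face, u, v) using standard cubemap indexing."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:                    # +/- x face dominates
        face, m, sc, tc = ('+x', ax, -z, -y) if x > 0 else ('-x', ax, z, -y)
    elif ay >= az:                               # +/- y face dominates
        face, m, sc, tc = ('+y', ay, x, z) if y > 0 else ('-y', ay, x, -z)
    else:                                        # +/- z face dominates
        face, m, sc, tc = ('+z', az, x, -y) if z > 0 else ('-z', az, -x, -y)
    return face, (sc / m + 1) / 2, (tc / m + 1) / 2

def build_warp_map(pixel_to_screen_point, viewer_pos, width, height):
    """Warp map: projector pixel -> cubemap (face, u, v) for the viewer.

    pixel_to_screen_point is a hypothetical calibration callback returning
    the reconstructed 3D screen point hit by projector pixel (px, py).
    """
    warp = {}
    for py in range(height):
        for px in range(width):
            sx, sy, sz = pixel_to_screen_point(px, py)
            d = (sx - viewer_pos[0], sy - viewer_pos[1], sz - viewer_pos[2])
            warp[(px, py)] = dir_to_cubemap(d)
    return warp
```

At render time, each projector pixel simply looks up its stored (face, u, v) in the animated cubemap, so the per-frame cost is a texture fetch rather than any geometric computation.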
Poster presented at SIGGRAPH 2016.