I think most of you have heard of the concept of dome theaters. The screen of such a theater is not a flat plane, but a sphere-like surface.

A dome theater

Usually these theaters use special projectors to generate correct images on the screen. But have you ever wondered what it would look like if we used a normal projector on one of these screens? With OpenGL and Python, we can quickly develop a simulation program for this scenario.

How to acquire the image?

A simplified camera/projector model

For simplicity, we use a very simple mathematical model for the camera and the projector in this scenario, illustrated below.

Camera/projector model

There are two components in this model, namely the optical center and the image plane. The optical center lies directly above the center of the image plane, and the distance between them is called the focal length. On the image plane, we assume that the coordinate system is identical to that of OpenGL textures, shown as follows.

Coordinate system on the image plane (source: learnopengl)

Compute the image by ray tracing

Ray tracing is a technique for rendering photo-realistic images. We can compute the projection on the spherical surface by using a modified version of ray tracing. The idea is shown in the following figure.

Modified ray tracing

When we look at the screen (which will appear on the camera’s image plane), our sight eventually lands on a point on the sphere (as shown by the red ray). Every point on the sphere may be lit by some point on the projector’s image plane. To find the pixel value corresponding to a point on the camera’s image plane, we draw a line (as shown by the blue line) between the screen intersection and the projector’s optical center and check whether it intersects the projector’s image plane. If so, the two points have the same color. More specifically, introduce the following notation:

  • $\bm{o}_c$: the optical center of the camera
  • $\bm{o}_p$: the optical center of the projector
  • $f_c$: the focal length of the camera
  • $f_p$: the focal length of the projector
  • $\bm{b}_c$: the bottom left corner of the camera’s image plane
  • $\bm{u}_c, \bm{v}_c$: the $u$ and $v$ axes of the camera’s image plane
  • $\bm{b}_p$: the bottom left corner of the projector’s image plane
  • $\bm{u}_p, \bm{v}_p$: the $u$ and $v$ axes of the projector’s image plane
  • $\bm{n}_c$: the normalized normal vector of the camera’s image plane, pointing away from the optical center
  • $\bm{n}_p$: the normalized normal vector of the projector’s image plane, pointing away from the optical center

Obviously the normal vectors can be computed from the $\bm{u}$ and $\bm{v}$ axes. The equation of the camera’s image plane is given by

$$\bm{n}_c \cdot (\bm{x} - \bm{p}) = 0,$$

where $\bm{p}$ is an arbitrarily fixed point on the plane. It is reasonable to take $\bm{p} = \bm{b}_c$.
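As a quick sanity check, the normal computation can be sketched in Python with NumPy (the function and variable names here are my own, not from the demo):

```python
import numpy as np

def plane_normal(u_axis, v_axis, plane_point, optical_center):
    """Unit normal of an image plane spanned by u_axis and v_axis,
    oriented to point away from the optical center."""
    n = np.cross(u_axis, v_axis)
    n = n / np.linalg.norm(n)
    # Flip the sign so the normal points away from the optical center.
    if np.dot(n, plane_point - optical_center) < 0:
        n = -n
    return n
```

The sign flip matters because the cross product alone only fixes the normal up to orientation.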

Similarly, the projector’s image plane is given by

$$\bm{n}_p \cdot (\bm{x} - \bm{p}) = 0,$$

where $\bm{p}$ is an arbitrarily fixed point on the plane. We can also take $\bm{p} = \bm{b}_p$. Now for a given 2D location $\bm{l} = (u, v)$ on the camera’s image plane (of course, $u, v \in [0, 1]$), its 3D coordinates are given by

$$\bm{x} = \bm{b}_c + u\,\bm{u}_c + v\,\bm{v}_c.$$

Therefore the normalized direction of the red ray in the illustration is given by

$$\bm{d} = \frac{\bm{x} - \bm{o}_c}{\|\bm{x} - \bm{o}_c\|}.$$

Introducing a parameter $t > 0$, the equation of the red ray is $\bm{r}(t) = \bm{o}_c + t\,\bm{d}$. Assume the equation of the sphere surface is given by

$$\|\bm{x} - \bm{c}\| = r,$$

where $\bm{c}$ is the center of the sphere and $r$ is its radius. Combining it with the ray equation, we have

$$t^2 + 2\,\bm{d} \cdot (\bm{o}_c - \bm{c})\,t + \|\bm{o}_c - \bm{c}\|^2 - r^2 = 0.$$

This is essentially a quadratic equation that is fairly easy to solve. There are three situations for its solutions (two, one, or no real roots), and we only consider the one where the ray intersects the sphere at two points. For the $t$ values of those two intersections (denoted by $t_1 \le t_2$), there are four possibilities:

  1. $t_1 > 0$ or $t_2 > 0$: if the nearest intersection with $t > 0$ lies on the hemisphere, then the screen is visible.
  2. $0 < t_1 \le t_2$ with the nearest intersection on the other half of the sphere: the screen is not visible because the sight is blocked by the back side of the screen.
  3. $t_1 \le t_2 \le 0$: the screen is not visible because there is no intersection in front of the camera.
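The ray–sphere test above can be sketched in a few lines of NumPy (a sketch under my own naming, not the demo's code; it returns the nearest positive $t$ and leaves the hemisphere check to the caller):

```python
import numpy as np

def ray_sphere(origin, direction, center, radius):
    """Intersect origin + t * direction (direction is unit length) with a
    sphere; return the smallest t > 0, or None if there is none."""
    oc = origin - center
    # t^2 + 2 (d . oc) t + |oc|^2 - r^2 = 0
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                    # no intersection at all
    sqrt_disc = np.sqrt(disc)
    t1 = (-b - sqrt_disc) / 2.0
    t2 = (-b + sqrt_disc) / 2.0
    for t in (t1, t2):                 # prefer the nearest hit in front
        if t > 1e-9:
            return t
    return None
```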

Now we only consider the sole intersection with the hemisphere, which is denoted by $\bm{q}$. The direction of the blue ray is given by

$$\bm{e} = \frac{\bm{q} - \bm{o}_p}{\|\bm{q} - \bm{o}_p\|}.$$

Introducing another parameter $t'$, we can write the equation of the blue ray as $\bm{s}(t') = \bm{o}_p + t'\,\bm{e}$. Combining it with the projector’s image-plane equation, we have

$$t' = \frac{\bm{n}_p \cdot (\bm{b}_p - \bm{o}_p)}{\bm{n}_p \cdot \bm{e}},$$

which allows us to compute the intersection $\bm{y} = \bm{o}_p + t'\,\bm{e}$ between the blue ray and the projector’s image plane. Strictly speaking, we should run the same intersection test again to make sure that this ray is not blocked by the screen, but for simplicity we skip that step here. To retrieve the location of $\bm{y}$ on the projector’s image plane, denoted by $(u', v')$, we can solve the following equation:

$$\begin{pmatrix} \bm{u}_p & \bm{v}_p & \bm{w} \end{pmatrix} \begin{pmatrix} u' \\ v' \\ w' \end{pmatrix} = \bm{y} - \bm{b}_p,$$

where $\bm{w}$ is an arbitrary vector that is NOT a linear combination of $\bm{u}_p$ and $\bm{v}_p$ (for example, $\bm{w} = \bm{n}_p$). Because we know that $\bm{y} - \bm{b}_p$ lies in the plane spanned by $\bm{u}_p$ and $\bm{v}_p$ only, $w'$ is guaranteed to be zero, so the choice of $\bm{w}$ does not matter. Therefore, we can just keep $(u', v')$. If both $u'$ and $v'$ are in the range $[0, 1]$, then the color of point $\bm{l}$ on the camera’s image plane will be equal to the color of point $(u', v')$ on the projector’s image plane.
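The projector-side step can be sketched as follows (again with assumed names; the plane normal is taken from the cross product and also serves as the third column of the linear system):

```python
import numpy as np

def project_to_image_plane(q, optical_center, bottom_left, u_axis, v_axis):
    """Cast a ray from the projector's optical center through the screen
    point q, intersect it with the image plane, and recover (u', v')."""
    n = np.cross(u_axis, v_axis)
    direction = q - optical_center
    direction = direction / np.linalg.norm(direction)
    denom = np.dot(n, direction)
    if abs(denom) < 1e-12:
        return None                        # ray parallel to the image plane
    t = np.dot(n, bottom_left - optical_center) / denom
    y = optical_center + t * direction     # intersection with the plane
    # [u_axis  v_axis  n] (u', v', w')^T = y - bottom_left;  w' must be 0.
    A = np.column_stack([u_axis, v_axis, n])
    u, v, _ = np.linalg.solve(A, y - bottom_left)
    return u, v
```

A point projects onto the screen only if the returned $u'$ and $v'$ both land in $[0, 1]$.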

Note that this idea generalizes to any type of surface, not only spherical ones.

OpenGL implementation

The entire process can be implemented in shaders and therefore accelerated by GPU hardware. To determine the position of a fragment, the fragment shader receives a built-in variable called gl_FragCoord, which holds the window coordinates of the fragment. To find the corresponding $\bm{l}$ vector, we can pass the window size in as a uniform and compute the quotient.
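In Python, that normalization amounts to a one-line quotient (a sketch mirroring the shader logic, with assumed names; in GLSL the same division happens per fragment):

```python
def frag_coord_to_uv(frag_x, frag_y, width, height):
    """Map window coordinates (as in gl_FragCoord.xy) to the normalized
    image-plane location l = (u, v) by dividing by the window size."""
    return frag_x / width, frag_y / height
```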

Demo

The demo can be found in the Tutorial_4 folder.

Basic usage:

  • Use mouse and “wasd” to look/walk around
  • Press Esc to exit
  • Press “p” to take screenshots
  • Press “o” to switch between controlling camera/projector
Looking directly into the screen
Looking from the side

In this article I discuss how to use this demo as an image generator to achieve projection correction.

Appendix

TikZ code for the illustration

\documentclass[convert={density=500}]{standalone}
\usepackage{times}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
\tikzstyle{every node}=[font=\tiny, inner sep=0pt, outer sep=0pt]
  \begin{scope}[xscale=0.8]
  \draw (0, 1) to[out=0, in = 0, distance=1.8cm] (0, -1);
  \shade[ball color = gray!40, opacity = 0.4] (0,0) circle (1cm);
  \draw (0,0) circle (1cm);  
  \end{scope}

  % view camera
  \draw[draw=black] (-1.5, -2.5) rectangle ++(0.8, 0.5);
  % projector
  \draw[draw=black] (0, -2.5) rectangle ++(0.8, 0.5);

  \node[yshift=8pt] () at (0, 0) {\parbox{1cm}{\centering{screen\\intersection}}};


  \node[circle, fill=black, minimum size=1pt] (eye) at (-1.1, -2.8) {};
  \draw[red] (eye) -- (0, 0) node[circle, fill=red, minimum size = 1.5pt, pos=0.2] {};

  \node[circle, fill=black, minimum size=1pt] (proj) at (0.4, -2.8) {};
  \draw[blue] (0, 0) -- (proj) node[circle, fill=blue, minimum size = 1.5pt, pos=0.8] {};
  \node[circle, fill=black, minimum size=1pt] at (0, 0) {};

  \node[yshift=-8pt] at (eye) {camera};
  \node[yshift=-8pt] at (proj) {projector};

\end{tikzpicture}
\end{document}