Your projection target is a flat surface, like a wall. Your content is unrelated to that flat surface: you just want to transform a part of that projection target into a rectangular screen. Using Homography (Transform 2d) you can easily distort the image you project so that it appears undistorted on the surface. This always works: no matter how the projector is rotated against the surface, how its lens is shifted, or from which position it projects.
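If you are curious about the math a node like Homography (Transform 2d) does under the hood, here is a minimal numpy sketch; the corner coordinates below are hypothetical numbers you would measure on your wall.

    import numpy as np

    def homography(src, dst):
        # Direct Linear Transform: find the 3x3 matrix H with dst ~ H @ src
        # from four 2d point correspondences
        A = []
        for (x, y), (u, v) in zip(src, dst):
            A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
            A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
        # the singular vector of the smallest singular value holds the 9 entries of H
        _, _, vt = np.linalg.svd(np.array(A, dtype=float))
        return vt[-1].reshape(3, 3)

    # corners of the image as we send it to the projector ...
    src = [(0, 0), (1, 0), (1, 1), (0, 1)]
    # ... and where those corners land on the wall (hypothetical, measured by hand)
    dst = [(0.10, 0.05), (0.95, 0.00), (1.00, 0.90), (0.00, 1.00)]

    # pre-warping the content with the inverse makes it land undistorted
    H_inv = np.linalg.inv(homography(src, dst))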
You want to project onto an arbitrary surface like the facade of a house, a sculpture, a dome or anything else. Since the surface is not flat, a simple projection onto it looks distorted.
That is because what you really want is a surface-oriented projection: as if the image on that surface were some property of the surface itself and the projector didn't even exist. Let's call that surface property a texture. You would expect that texture to look the same, independent of the angle towards some projector position - note that the projector is just the stupid hack used to realize your wanted surface property.
But since you're stuck with that hack, in order for the projected image to appear undistorted on the surface, you need to distort your image according to all the features of the target surface.
Incredibly enough, that distortion is theoretically quite easy to achieve: it can be realized by having an exact virtual replica of your real scene. By rendering the 3d model from the same viewpoint and with the same lens characteristics as the real projector, the resulting image will fit the projection surface perfectly. Any flat textures you give the 3d model will look undistorted on the real surface and therefore become the textures of the real architecture we talked about earlier.
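To make "same viewpoint, same lens characteristics" concrete, here is a hedged numpy sketch of the projection matrix a virtual projector could use; the field of view and lens-shift values are assumptions you would measure on, or tweak against, your real projector.

    import numpy as np

    def frustum(l, r, b, t, n, f):
        # standard OpenGL-style off-axis perspective matrix
        return np.array([
            [2 * n / (r - l), 0.0, (r + l) / (r - l), 0.0],
            [0.0, 2 * n / (t - b), (t + b) / (t - b), 0.0],
            [0.0, 0.0, -(f + n) / (f - n), -2 * f * n / (f - n)],
            [0.0, 0.0, -1.0, 0.0]])

    def projector_frustum(fov_y_deg, aspect, shift_y, near=0.1, far=100.0):
        # real projectors rarely project symmetrically around their optical
        # axis: the lens shift moves the whole image window off-axis.
        # shift_y is in image heights (0.5 = half an image upwards) -- an assumption
        t = near * np.tan(np.radians(fov_y_deg) / 2)
        b, r = -t, t * aspect
        off = (t - b) * shift_y
        return frustum(-r, r, b + off, t + off, near, far)

    # hypothetical numbers: 30 degree vertical fov, 16:10 image, table-top shift
    P = projector_frustum(30.0, 1.6, 0.5)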
In real space a projector beams straight rays of light from ONE point (its light bulb) onto the different points of the architecture (let's concentrate on some of them in the drawing). We call that a projection of a 2d image into 3d space. Now go to the projector's position and look at the scene, or take a picture. What you see is based on the same principle, just the other way around: rays of light come from each point of the architecture to your eye/lens (the same ONE point in space), making the 2d projection of the 3d scene. You can explain both with the same drawing, just by switching the direction of the ray arrows. It is the same thing, or rather its exact reverse.
abstract:
Record = Anti-Project and Project = Anti-Record
with
Record: (3d) -> (2d)
Project: (2d) -> (3d)
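That reversibility can be spelled out in a few lines of code: a minimal pinhole sketch (focal length and the sample point are made-up values) in which recording a point and projecting the pixel back along the same ray round-trip exactly.

    import numpy as np

    F = 1.0  # focal length of the one shared pinhole at the origin, looking down -z

    def record(p):
        # 3d -> 2d: the camera records a point via the perspective divide
        x, y, z = p
        return np.array([F * x / -z, F * y / -z])

    def project(q, depth):
        # 2d -> 3d: the projector sends the pixel back out along the very same
        # ray; 'depth' is where it happens to hit the architecture
        u, v = q
        return np.array([u * depth / F, v * depth / F, -depth])

    p = np.array([0.3, 0.2, -2.0])                  # a point on the architecture
    assert np.allclose(project(record(p), 2.0), p)  # Record = Anti-Project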
If we render that exact scene in virtual space from the same angle (but with the desired textural properties on the model), we get the same image that the real projector "sees", but with that additional textural information. By projecting it the same way as we recorded it, the real model gets enriched with the surface textures of the virtual model.
This works perfectly - in theory - but chances are that your virtual scene's parameters will never exactly match your real-world parameters, as some of them, like the projector's rotation, are hard to measure. But never mind: that is where the manual tweaking of those parameters begins, and with some patience and practice you'll get quite good results.
To build a virtual copy of your 3d scene you can use any of the usual 3d modelling tools.
We showed how to turn boring real objects into real objects with texture.
Others may still call it "undistorting projections", but indeed it is all about texturizing real objects.
So if you now have a complicated setup, think of what texture you want to have on your real architecture and then head for texturizing your virtual copy.
One of the most astonishing things is a sculpture with many faces, each face textured differently. Texturing that is oriented to the real architecture, paying respect to the aspects of the real geometry, is a thrilling thing.
So, you need to 2d-texturize your virtual copy. For that you need to build a mesh with different sets of texture coordinates, to be able to put the different textures onto the same model. Another solution is to export many meshes for the differently textured parts and render them one after the other, with different texture sources...
However, keep in mind that this kind of texturing is 2d texture coordinate based.
semi-abstract
Wallpaper: (Texture.xy) -> Color
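In code, that semi-abstract line is just a lookup into an image at the uv coordinates stored in the mesh; a small sketch (nearest-neighbour sampling, with a random stand-in texture):

    import numpy as np

    def wallpaper(texture, uv):
        # (Texture.xy) -> Color: look up a 2d image at the mesh's uv coordinates
        h, w, _ = texture.shape
        x = min(int(uv[0] * w), w - 1)
        y = min(int(uv[1] * h), h - 1)
        return texture[y, x]

    tex = np.random.rand(256, 256, 3)     # stand-in for a FileTexture
    color = wallpaper(tex, (0.25, 0.75))  # one uv pair from the mesh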
There is also something like a 3d texture. Think of a tree, which is textured at every point within its trunk; you just need to cut it to see that point.
3d textures can be loaded via FileTexture, but you can also create them on the fly via some tricky pixel shader code, spitting out a color dependent on the position in space.
semi-abstract
Tree: (Position.xyz) -> Color
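Such a position-to-color function is easy to sketch; here is a hypothetical one in Python that fakes the tree's growth rings (the ring frequency and the wood colors are made-up values), the kind of thing a pixel shader would evaluate per fragment:

    import numpy as np

    def tree(p):
        # (Position.xyz) -> Color: concentric rings around the trunk's y axis
        x, _, z = p
        rings = 0.5 + 0.5 * np.cos(40.0 * np.hypot(x, z))  # ring frequency: assumption
        return np.array([0.55, 0.35, 0.20]) * (0.6 + 0.4 * rings)  # wood tones

    color = tree(np.array([0.10, 1.30, 0.05]))  # any point inside the trunk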
Imagine a cylindrical 360° projection or a dome projection, where you have already rebuilt the screen, adjusted real and virtual projectors and all that. All of that was just to undistort the projection, paying respect to the curved screen and to the shift and orientation of the projectors, right?
But when it comes to the content, you don't want to pay respect to that boring screen. It is not like a sculpture, which is interesting on its own. You want to turn it into a room again.
So we are talking about a different kind of projection here: a projection related not to real projectors, but to the 3d content we want to dive into, regardless of which display technology we use. It is more that we want something like an environment map of our content.
Imagine you stand inside a dome projection and look in all directions from one point in space. What you want to see is a room, not the screen. So what we need is a projection from the desired 3d scene onto our 2d screen in such a way that, when it is projected back to your eye, you feel like you're in that room.
So what we actually need is a projection of the whole scene from ONE pivot point, so that we are able to look around and see one consistent 3d scene.
Theoretically this only works perfectly when the viewer is at the pivot point from which we projected the content onto the screen, but what can you do, right? XD
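The core of that pivot idea fits in a few lines; a sketch (the pivot position is a hypothetical eye height): for every point of the virtual screen, the content that belongs there is whatever the content scene shows along the ray from the pivot through that point.

    import numpy as np

    PIVOT = np.array([0.0, 1.6, 0.0])  # hypothetical eye height inside the dome

    def content_ray(p_screen):
        # direction in which to sample/render the content scene for this
        # point of the virtual screen
        d = p_screen - PIVOT
        return d / np.linalg.norm(d)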
So let's think of the whole setup in these steps:
In the end it is the same idea all the time: shoot something and project it back. This time with the difference that you shoot a 3d scene which doesn't exist in the real world (and, vice versa, the real-world screen doesn't exist in the recorded virtual world); the captured, reprojected light rays now just travel until they hit the screen. This time we pay respect not to the perspective of the projectors but to the perspective of the viewer. And this time you have the ability to move around or rotate the camera in the content scene, since it is not connected to the other worlds.
One way to capture the virtual scene is to use a slitscan shader within your animation software, take that long flat video and put it as a 2d texture onto your cylindrical model.
This simplifies everything a little, but gives you fewer degrees of freedom. If you just use the 2d texture coordinates of your screen, no virtual projection is involved and the pivot point is always in the middle, provided the texture coordinates are spread regularly across the screen. So this is only half of the idea, but it enables you to use slitscan video.
semi-abstract
Wallpaper: (Texture.xy) -> Color
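For a cylindrical screen those regularly spread texture coordinates can be written down directly; a small sketch (the screen height is an assumed parameter): the slitscan video's long x axis wraps once around the cylinder, y runs up the screen.

    import numpy as np

    def cylinder_uv(p, height):
        # texture coordinates of a point on a cylindrical screen
        angle = np.arctan2(p[2], p[0])    # position around the cylinder
        u = (angle / (2 * np.pi)) % 1.0   # 0..1, once around
        v = p[1] / height                 # 0..1, bottom to top
        return u, v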
As long as you can get your fingers on several regular perspective views of the scene which together hold all the content needed, use those perspective views and project them onto the virtual screen. The rendering time within an animation software will probably also drop drastically.
When you have one computer per projection and also want to have generic 3d content, you want to record just as much content as necessary for each projection. You can do that, and you don't need to do it slitscan-wise: just render the scene perspectively on all computers from the same pivot position (but with differing rotations) and project the results back onto the virtual screen. No need to undistort anything again; the virtual rays do it for you.
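As a sketch of what "same pivot, differing rotation" could look like, here is a hypothetical four-computer 360° setup in numpy; the 90° slices and the eye height are assumptions, and each camera's horizontal field of view would have to match its slice.

    import numpy as np

    def yaw(deg):
        # rotation around the vertical axis
        a = np.radians(deg)
        return np.array([[ np.cos(a), 0.0, np.sin(a)],
                         [ 0.0,       1.0, 0.0      ],
                         [-np.sin(a), 0.0, np.cos(a)]])

    PIVOT = np.array([0.0, 1.6, 0.0])          # shared by all four cameras
    views = [yaw(i * 90.0) for i in range(4)]  # one 90 degree slice per computer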
Just take care that you set everything up consistently. The optical laws somehow hold in virtual space as well.
Demo here: http://vvvv.org/tiki-download_file.php?fileId=1698 (small 360 degree setup.zip, 35.95 Kb)
semi-abstract
Record (Cone): (3d scene) -> (2d Texture)
Project (Cone): (2d screen in 3d) -> (2d Coordinates on Screen)
Wallpaper: (Texture.xy) -> Color
Here the Content Scene = Virtual Scene (= Real Scene anyway), so the screen actually is what gets textured: the architecture is the screen, not the content.
There is another record & back-project technique used here, however; it is needed for the perspective projection of light, resulting in shadows:
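That is the idea behind shadow mapping, which follows the same record/project pattern: first record the scene's depth from the light's point of view, then, while rendering, project each surface point back into that depth image. A hedged sketch of the final compare (all inputs here are hypothetical):

    import numpy as np

    def in_shadow(depth_map, light_uv, light_depth, bias=1e-3):
        # if the light recorded something nearer at this pixel, another
        # surface blocks the ray and the point lies in shadow
        h, w = depth_map.shape
        x = min(int(light_uv[0] * w), w - 1)
        y = min(int(light_uv[1] * h), h - 1)
        return light_depth > depth_map[y, x] + bias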
also see
How To Project On 3D Geometry
Note that the projection will only look perspectively correct when viewed from the one point in the real world that corresponds to the virtual camera's position.
From all other points in the real world the projected image will not look correct. But never mind: in most cases this isn't a big deal, as our TV/cinema-trained eyes and brains are quite tolerant of such visual challenges as long as our viewing position isn't too far off the actual position. See: http://users.skynet.be/J.Beever/pave.htm
Auto calibration solution: http://www.domeprojection.com/