Austin 3D Users Group, Wed. July 27, 2005 Hosted by Zebra Imaging.

MEETING NOTES

This meeting was hosted by Zebra Imaging at their facility and was quite well-attended. Wow! The first 30 minutes of the meeting was dedicated to browsing their impressive holographic display gallery. If you haven’t seen their work, be sure to watch the movies on their website.

Some of the holographic tiles are mounted on the wall, some rest on the floor, but the images appear to project away from the flat surface that contains them. As you walk around the images they change, just as if you are walking around a tangible real-world object. Amazing!

Dan Siegle of Zebra Imaging shared a few tidbits about their process. A 2ft x 2ft display takes up to 81,000 frames to create! And their new high resolution technology takes that many frames per square foot! No wonder they have such an extensive render farm.

It must have been quite a sight as we attempted to see each image from all possible angles. Some of us were even crouching on the floor to look at the underside of the car holograms.

Thanks Zebra!


First up was show-and-tell from Dennis Thompson.

Note: This was a very simple and interesting presentation. Dennis saw what was shown at past meetings and was inspired to try it out in his own way. He had had Deep Exploration and Deep Publish for only a few weeks before this presentation, and Right Hemisphere is already using Dennis' example in their demos. Amazing what you can do in such a short amount of time!

Dennis shared a large 3D scene that he created in 3ds Max consisting of about 3 million polys. He noted that the scene's textures originally totaled about 100MB, and the resulting Max file was a little more than 128MB.

He used Deep Exploration (see prior meeting notes) to translate the scene into a 46MB .RH file and used Deep Publish to create a 99MB MS Power Point file. He published to .pdf from Power Point and the resulting file was just 26MB!

He took us on a walk-through of the environment within the .pdf document! This was an excellent follow-up to the presentations we received over the last few months. Dennis works on a shoestring budget, but the results were quite impressive.

We should point out that if you want the Poly Reduction Tools (shown at a previous meeting) when using Deep Exploration, you’ll need the CAD Tools module for this feature. Dennis reduced his poly count in Max before translating the file.

Q: (Bill) Did you try to cut your polys in the scene before creating the .pdf?

A: Yes, mainly on the trees. I reduced the foliage.

Q: Once someone receives a pdf file like this, can they open up the file in Max?

A: No, you can only print different views. The 3D data is not accessible. (i.e. if you're sharing a concept model with a client, they won't be able to walk off with it)

Q: What are the system specs?

A: Very minimal.

Q: Does it need to be viewed on a card with OpenGL?

A: No, but there are certain limitations. The .u3d format doesn't support bump maps, but it does read all the Max shaders. This scene uses procedural textures.

Note: u3d doesn't support bump, opacity, or specular color maps, but the .rh format for Power Point does. Sometimes RH will think the file has textures that actually weren't assigned to anything in Max. Make sure you look over the list and delete unused materials, and you'll get better translations.

Q: How long would this take to create?

A: This .pdf? It took about 20 minutes. I did spend about an hour or so getting acquainted with the product at first, though.

It can do animations in .pdf, but you need JavaScript to make that happen. Office docs are very easy for creating and sharing basic animations...especially Power Point.

It will not do particles or effects. It does have LODs (levels of detail) which is very cool.

Q: All you need to see the file is a regular .pdf reader?

A: Yes, the most current version of Adobe Reader is required. It's a free download. But to make the .pdf you will need Acrobat 7 Pro.

Q: How about colored lighting?

A: Yes, it has lots of options (lighting, outline, shading, etc) but I've only had this a few weeks and haven't explored all of those options yet.

It can run on the web via HTML.

Q: Which products are needed for this process?

A: Deep Exploration to convert the file to .u3d. You can use Deep Exploration ($149.00) with Acrobat 7 Pro to create a .pdf. You need Deep Publish ($99.00) to place these 3D files into MS Office documents (including Excel, Word, & Power Point).

If you're using Deep Exploration alone with Max, it will create a .u3d file, but you'll need something to put the file in...like .pdf or .ppt for example. If you’re translating a Max or Maya file, you’ll need to have licensed copies of those packages on your system. Deep Exploration also translates Max to Maya and vice versa, which worked well with the Doom Mod.

See a .pdf table of import/export files here

Note: Dennis found that really large files were better to output to .RH using Deep Exploration, then output as .u3d from Deep Publish.

Note to the group: Dennis just accepted a job in Louisiana with a Defense Contractor.

Thanks Dennis! Good luck with your new adventures!


Next up, Johnathon Vought was to show us a pre-prepared Combustion demo from a class he taught at SXSW for the Digital Media Academy.

Unfortunately, the files were missing, so he talked a bit about some Combustion rotoscoping processes instead.

Johnathon: Who here has rotoscoped? (~10 folks raise their hands)

He talked about the process: "compositing" is often used to refer to placing green-screen footage into a scene.

As far as formats go, .jpg and .png work well. .rpf is great because you can render from Max and it produces a file with lots of data: alpha, z-depth, motion blur, UV, lighting, normals, etc.

Q: Is rpf format Discreet (Autodesk) only?

A: (Ruben) No, but rpf is really robust in Max. You can light the scene in Post, for example.

Johnathon opened up the Schematic view (node-based, like Max's schematic view) and talked about image sequences. The Combustion workflow works like this: import footage, right-click in the schematic view, and add an operator to create a mask. Decide what kind of mask (box, oval, etc.) you want to work with (zoom with the middle mouse, like Max).

Once you’ve decided on your basic shape, you can adjust it with Bezier handles.

Split view allows you to see your video in one pane and the schematic in another.

* In Stack, double click on subobject, choose "animate."

Do the mask by cutting the timeline in halves: set your splines at the beginning, adjust them at the end, then again at the midpoint. Then continue working at midpoints (between keys) until you have smooth shape transitions. This is very much like setting up 2D walk cycles. Let the computer's interpolations help you. Once you've done this a few times, you'll get a feel for which points in the animation will make the smoothest adjustments. For example, there might be an apex in the movement where you'd need to place the keyframe. Soon your well-placed keyframes will be in sync with the motion and the transitions will make sense.
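The halving workflow above can be sketched as a small bisection generator that yields the order in which you'd place mask keys. This is a sketch only; the function name and frame numbers are illustrative, not anything in Combustion:

```python
def keyframe_order(first, last):
    """Yield frame numbers in the order you'd key a roto mask:
    the endpoints first, then successive midpoints, letting the
    computer's interpolation fill in between each new pair of keys."""
    yield first
    yield last
    spans = [(first, last)]
    while spans:
        a, b = spans.pop(0)
        mid = (a + b) // 2
        if mid in (a, b):
            continue  # no whole frame left between these two keys
        yield mid
        spans.append((a, mid))
        spans.append((mid, b))

# For a 100-frame shot: key frames 0 and 100, then 50, then 25 and 75...
print(list(keyframe_order(0, 100))[:7])
```

Every frame between the keys is eventually reachable, but in practice you stop subdividing as soon as the interpolated mask tracks the footage.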

This is a great place to start to be a Flint/ Inferno artist. Local studios need people who can do clean, smooth masks.

Roto-matte refers to the spline-adjustment process of rotoscoping.

Q: (Garry) Is there a Primatte plugin?

A: Primatte helps with removing green-screen backgrounds. Combustion will automatically remove green-screen backgrounds (composite/color mask), and you can get a Primatte plug-in if it helps.

*Bring clips in at frame zero so they are lined up. If the time slider is set at 20 and you import a clip, the clip will automatically go to the slider position. You can move the clip afterward, but it will save you time to do it right the first time.

Q: (Carl) What are the advantages of Combustion vs. After Effects?

A: The overall node-based workflow and the shapes (splines). Combustion also has particle effects (fire, water, smoke, fog).

All the After Effects plugins work with Combustion too.

Q: Exports to?

A: AVI, QuickTime, video or still, Flash, vector animation.

Q: (Josh) How hard is it to do cartoony linkletter type stuff?

A: (Ruben) There's a paint effect system. You do need to have skills to make that work.

Q: (Jamie) What's a garbage matte?

A: A simple shape to remove junk from your scene, for example a microphone boom visible at the top of the frame. A garbage matte is a quick way to get rid of it.

You can also turn vector shapes (splines, vector paint, etc.) into an alpha map (like Photoshop's Quick Mask) to adjust your selection, add blurs, etc. Sometimes working in pixels will produce a really good result, since you can see the actual results of your selections.

Johnathon mentioned that Gary Walker does individual strands of hair for his garbage mattes, adding strands in layers at various degrees of opacity. This process takes tons of work but gets incredible results.


For the final segment of the evening, Ruben Garza of Autodesk Media & Entertainment talked with us about rendering and compositing:

Combustion uses B-splines, which have three degrees of control (x, y, z) for each control point and are easier for rounded edges.

Normal splines carry six values of data per point; managing all of those was so much work that B-splines, with just their x-y-z control points, were added.
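As an aside, a uniform cubic B-spline segment can be evaluated directly from four bare control points. This is the generic textbook formula, not Combustion's implementation; it just shows how the smoothness comes from the basis functions rather than from per-point tangent handles:

```python
def cubic_bspline_point(p0, p1, p2, p3, t):
    """Evaluate one segment of a uniform cubic B-spline at t in [0, 1].
    Each control point is just a bare coordinate tuple; the rounded,
    smooth shape comes from the blending weights below."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t**3 / 6.0  # the four weights always sum to 1
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# Control points on a straight line stay on that line:
print(cubic_bspline_point((0, 0), (3, 0), (6, 0), (9, 0), 0.0))
```

Note that the curve does not pass through the control points themselves (only near them), which is part of what makes B-splines forgiving for rounded mask shapes.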

Note: Combustion’s capsules provide the means for creating a completely customized operator. A capsule takes a group of controls & encapsulates it to process complex effects easily, and share the effects with members of your studio. For example, grouping keyer and color corrector setups or perhaps create a “film-look” capsule to add glow, color correction, and grain to footage to make it look like old stock. Capsules can be nested within other capsules in an unlimited number, creating complex processing nodes that can be applied to your workspace quickly and easily.

(this feature is especially good if your team members don't know how to use the entire package)

Ruben opened 3ds Max to show a simple architectural scene brought from Revit into Max, with a single light source outside the windows to represent the sun. He noted that Max has two types of lights, and the render time is the same with each:

STANDARD LIGHTS:

-intensity 1x, 2x, 3x

-color controls

PHOTOMETRIC LIGHTS:

-temperature (e.g. fluorescent light, etc)

-brightness

-exposure (like fstop on a camera)

Ruben noted that the photometric lights have far more features, so they’re much better to use. To demonstrate photometric lighting, he changed "distribution" to web, then brought parameters from real-world lighting products into Max. He picked the light he downloaded from a specific manufacturer’s site. Try it! Anyone can get these files from lighting manufacturers. Google them.

Ruben showed us the scene in walk-through mode and did an old-style render. Then he talked about how radiosity calculates light as it bounces from poly to poly. As a result, large polys give bad results, so use a subdivided mesh. Adaptive subdivision will subdivide according to the scene to distribute the light better. Radiosity is a static solution, meaning you can render the scene from any angle, but objects moved within the scene won't cast correct shadows because the lighting is baked in.
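The "subdivide your big polys" advice can be illustrated with a toy patch splitter. This is not Max's adaptive-subdivision algorithm, just the idea behind it: radiosity stores one bounced-light value per patch, so one big poly smears the lighting while smaller patches can capture its variation:

```python
def subdivide(quad, max_edge):
    """Split an axis-aligned quad (x0, y0, x1, y1) into patches no
    larger than max_edge on a side.  One lighting value per patch:
    fewer, bigger patches means coarser, blotchier radiosity."""
    x0, y0, x1, y1 = quad
    if (x1 - x0) <= max_edge and (y1 - y0) <= max_edge:
        return [quad]
    xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    patches = []
    for q in ((x0, y0, xm, ym), (xm, y0, x1, ym),
              (x0, ym, xm, y1), (xm, ym, x1, y1)):
        patches.extend(subdivide(q, max_edge))
    return patches

# A 4x4 wall at 1-unit resolution becomes 16 patches:
print(len(subdivide((0.0, 0.0, 4.0, 4.0), 1.0)))
```

A real adaptive scheme would subdivide only where the lighting actually changes quickly (near shadow boundaries, for instance) rather than uniformly like this.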

Q: Can you apply radiosity to selective areas?

A: You can use hybrid solutions with render elements. Ruben showed the exposure controls (the benefit is that you don't have to set controls on each light; just keep the settings).

Q: how much radiosity is excessive?

A: 18% is good enough. In general, settings higher than that will slow down your scene unnecessarily, unless you’re going for some sort of scientific accuracy.

Radiosity vs. Mental Ray:

Radiosity: calculates each angle, bakes in lighting, and represents a solution for 1 point in time.

Mental Ray: only based on perspective, requires less calculation, and has the added benefit of 8 render nodes for network rendering.

Q: (Garry) are all the lights in the scene photometric lights?

A: Yes, as indicated by the symbols

Leaving default settings in Mental Ray will give a splotchy result. Here is a recommendation:

500 is low - good for preview

5000 – higher quality, good for production

Final Gather is slow (better to up samples). Disable Final Gather and crank up sample settings.

Note - if you just use final gather - you might get an interesting result. One group member noted that he used Final Gather and had a very artistic result which had an “oil painterly” feel to it. A happy accident! (though not recommended for most work)

Q: what kind of shadows to use?

A: Use ray-tracing shadows; area shadows take forever

Q: (Garry) What does it mean when your render results in black dots on the screen?

A: That error sounds familiar. If you used a bump-map, then that’s a known issue that’s been addressed in the current version.

Q: For render nodes, does a dual processor count as two?

A: It depends on how your computer sees itself. If it's a hyperthreading processor that thinks it's a dual, it will count as two render nodes, just as a computer with two actual processors will need two render nodes to go with them.
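On a current system, one quick way to see how many processors the OS "thinks" it has (and therefore how many render nodes you'd need by the rule above) is Python's standard library. `os.cpu_count()` counts logical processors, so a hyperthreading CPU shows up as two:

```python
import os

# os.cpu_count() returns the number of logical processors the OS sees;
# a hyperthreading CPU that "thinks it's a dual" reports 2, and each
# logical processor would need its own render node.
nodes_needed = os.cpu_count() or 1  # cpu_count() can return None
print(f"render nodes needed: {nodes_needed}")
```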

Finally, Ruben talked about Render Elements- the feature that ties Max and Combustion together so effectively.

Render elements give you more control of a scene.

* Did a shadow pass for each element

Garry: The G-Buffer Builder (graphics buffer) is a way to use various elements to build the final image.

You can render elements to PNG (small file), RPF (large file with lots of useful data), RLA, etc. RLA is an old filetype that didn't support as many channels as RPF.

Note: RPF or RLA formats are rendered from Max, and contain scene information not contained in other file formats. The G-Buffer builder operator is used to create channel information, which is used to apply 3D post filters (zdepth, camera matching, shadows, etc) to a scene.

Ruben showed us how the Z buffer works to represent the depth of a scene as a greyscale (alpha) image: white = close up, black = far away, and the shades of grey are varying degrees of depth in between.

Most of us think of Z-depth as useful only for CG scenes, but he showed us how to apply a Depth of Field filter to film footage in post-production. He threw on a greyscale Paint operator and indicated which areas of the scene were up close and which were far away. This approach can be used for many effects, like 3D depth and 3D shading.

Other examples: fake lighting plus DOF in a scene, text elements, and morphing (once cutting edge, but now a tired effect used primarily for subtle changes between scenes).

Neat! He used a still image for this example, although the effect can be applied to both stills and motion.
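The Z-buffer convention above, and the way a depth-of-field effect can ride on it, can be sketched in a few lines. This is a toy model with made-up parameter names, not Combustion's actual filter:

```python
def depth_to_gray(depth, near, far):
    """Encode depth as an 8-bit grey value, following the convention
    in the notes: white (255) at the near plane, black (0) at the far
    plane, shades of grey for everything in between."""
    t = (depth - near) / float(far - near)  # 0 at near, 1 at far
    t = min(max(t, 0.0), 1.0)               # clamp depths outside the range
    return round(255 * (1.0 - t))

def blur_radius(depth, focus, max_radius, near, far):
    """Toy depth-of-field: blur grows with distance from the focus
    plane, up to max_radius pixels at the nearest or farthest depth."""
    span = max(focus - near, far - focus)
    return max_radius * abs(depth - focus) / float(span)

print(depth_to_gray(0.0, 0.0, 10.0))          # near plane -> white, 255
print(blur_radius(5.0, 5.0, 8.0, 0.0, 10.0))  # in focus -> 0.0
```

Painting the greyscale map by hand, as Ruben did with the Paint operator, amounts to assigning these `depth_to_gray` values directly instead of rendering them from a 3D scene.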

Ruben talked a bit about batch rendering. The new Scene States feature allows you to save your scene with various settings: some objects selected or deselected, different lighting effects, and so on. Pretty much anything in a scene can be turned on or off per scene state. Then you can save all of your scene states and render the lot of them using Batch Render. It's like layer comps in Photoshop, only far more dynamic.

Q: Is Scene States new in 7.5?

A: yes. You can set up tons of states, but you’ll have to render them all if you’re using batch render.

Q: Can you put shadows from one state onto another?

A: You can render shadows into a separate element, then composite them into the scene separately.

-The RPF Image File Format window gives you many options. If you choose to export UV coordinate systems, it will make file sizes huge.

Q: Can Combustion be used for in-game cinematics?

A: Yes.

Many of you picked up a copy of Combustion Unplugged. Enjoy! When you’re ready to try it out, give Jamie a call to see about current promotions.

See you again in August!

Jamie Crawford
Animation Representative
Austin Business Computers, Inc.
512-328-4747
animation@ausbcomp.com


Zebra manufactures the only machine in the world that can print full-color holographic images of 3D scenes on film, viewable under normal light.

Contact Information:
Zebra Imaging, Inc. Headquarters
9801 Metric Blvd, Suite 200, Austin, TX 78758-5455
512-251-5100 phone / 512-251-5123 fax
information@zebraimaging.com
Sales & Marketing: 512-251-5100

Background of Zebra
Founded in 1996 by graduates of the Media Laboratory at the Massachusetts Institute of Technology, Zebra Imaging has rapidly become the market leader in high-quality, three-dimensional holographic imaging. Founders: Michael Klug, Mark Holzbach, and Alex Ferdman.

The mission of the company was established at that time: to develop and provide the best technologies and products for three-dimensional visual communications. With initial and continuing support from Ford Motor Company and other investors, the company has focused on developing digital holographic imaging technology over the last 6 years, and succeeded in producing the world’s largest and most unique holographic images for application in a variety of markets.

Zebra Imaging has developed unique full-color digital holographic recording technology, and a number of products based on this technology, over the last nine years. To date, Zebra is the only company that offers an autostereoscopic, full-parallax (viewing of the image over and under and side to side), full-color display for three-dimensional digital imagery. Zebra’s technique produces images that are scalable to any size, and yet portable, since they are recorded on very thin modular tiles. Unlike single-view and binocular 3D displays, Zebra holographic images enable intuitive image-based collaboration, since they require no special eyewear or head trackers, and display all perspectives simultaneously for independent access by any number of viewers. The wide angle of view of Zebra holographic images can accommodate up to 100 degrees of look-around, or it can be subdivided into channels to show short animations or peel-away and overlay views. It has served companies such as Honda, Ford, Peugeot, Boeing, Exxon, Brigham, NASA, ARL, etc.

The company is composed of some of the foremost experts in 3D imaging, and systems design, development and engineering, and has sought to augment its capabilities by forming relationships with key precision machine manufacturers. Zebra has developed significant intellectual property in its core field, having been granted 23 patents, with 12 others pending in the US and 8 foreign pending. The company has raised $14 million in equity offerings thus far. Zebra continues to be privately held, with Ford, Convergent Investors, and DuPont as strategic shareholders.

Zebra Imaging is focused on visualization applications with its current technologies, with primary markets in automotive and manufacturing design and engineering, defense, aerospace, and petroleum and gas exploration.

Technology of Zebra
Zebra’s digital holographic technology enables collaborative 3-D viewing in full-color without viewing aids, such as glasses or goggles. This unique technology produces images with full-parallax, exhibiting natural perspective change as the viewer moves in any direction. The technology provides high-resolution images in monochrome or color, without distortions. Digital holographic images are scalable to any size, can contain multiple unique images and animated sequences, and can be displayed in multiple formats. Unlike other 3D display technologies that require complex projection systems, Zebra holographic images are activated with a simple, single light-source.


Check out their holographic displays.....very exciting!  http://zebraimaging.com


