Thursday, January 28, 2010

WebGL first experiment

WebGL is going to be a great revolution for us. 3D scenes inside websites will change the way we see the web, and will also open a new way to build web-based applications!

In a nice initiative, members of the RGBA demo team have released a web-based shader editor: Here


The only drawback is that, for the moment, only some nightly builds of Firefox and Chrome support the WebGL standard. (So it is not mainstream yet...)

Several other websites showcase WebGL with JavaScript libraries:

JSlibs

Canvas 3D JS Library (the Mozilla default choice)

GLGE

How does WebGL with JavaScript differ from O3D (Google's own web 3D API)?

First, there is the fact that Google considers that JavaScript is still not interpreted fast enough...

Quote :

>>O3D is not going away. WebGL is a very cool initiative but it has
>>a lot of hurdles to overcome. The direction of WebGL is trying to
>>just expose straight OpenGL ES 2.0 calls to JavaScript. 

>>JavaScript is still slow in the large scheme of things. Maybe at 
>>sometime in the future WebGL will have added enough features over
>>basic OpenGL to be more powerful or JavaScript will have gotten 
>>a few orders of magnitude faster but at the moment…
>>…
>>The WebGL team at Google and the O3D team are currently the same
>>team. We have every interest in seeing both WebGL and O3D succeed.

Source: o3d differences

For my part, I want to see entire applications built in WebGL, not only rendering...

Wait and see ...


Wednesday, January 27, 2010

AMD Stream SDK 2.0 release


ATI Stream SDK 2.0 is the first production SDK for both AMD GPUs and x86 CPUs.

What’s New in v2.0 :
First production release of ATI Stream SDK with OpenCL™ 1.0 support.
New: Support for OpenCL™ ICD (Installable Client Driver). 
New: Support for atomic functions for 32-bit integers.
New: Microsoft® Visual Studio® 2008-integrated ATI Stream Profiler performance analysis tool.
Preview: Support for OpenCL™ / OpenGL® interoperability. 
Preview: Support for OpenCL™ / Microsoft® DirectX® 10 interoperability. 
Preview: Support for double-precision floating point basic arithmetic in OpenCL™ C kernels. 
Updated cl.hpp from the Khronos OpenCL working group release. 
Various OpenCL™ compiler and runtime fixes and enhancements (see developer release notes for more details).
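
One of the new items above, atomic functions for 32-bit integers, is easy to illustrate. Below is a minimal sketch of an OpenCL C kernel that counts positive values with an atomic increment; the host side uses Python and pyopencl, which is my own choice for the example and not something shipped with the SDK.

```python
# Minimal sketch: 32-bit integer atomics in an OpenCL 1.0 kernel.
# The pyopencl host code is an illustration, not part of the ATI Stream SDK.
import numpy as np
import pyopencl as cl

kernel_src = """
// OpenCL 1.0 requires enabling the 32-bit global atomics extension explicitly.
#pragma OPENCL EXTENSION cl_khr_global_int32_base_atomics : enable

__kernel void count_positive(__global const int *data, __global int *counter)
{
    int i = get_global_id(0);
    if (data[i] > 0)
        atomic_inc(counter);   // atomic +1 on a 32-bit integer
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

data = np.random.randint(-10, 10, size=1024).astype(np.int32)
counter = np.zeros(1, dtype=np.int32)

data_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=data)
counter_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=counter)

prg = cl.Program(ctx, kernel_src).build()
prg.count_positive(queue, data.shape, None, data_buf, counter_buf)
cl.enqueue_copy(queue, counter, counter_buf)

print("positive values:", counter[0], "/ CPU check:", int((data > 0).sum()))
```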

Link & Source : Here

Monday, January 25, 2010

Ptex is now open source


Ptex: Per-Face Texture Mapping for Production Rendering is now open source.

Ptex was used on virtually every surface in the feature film Bolt, and is now the primary texture-mapping method for all productions at Walt Disney Animation Studios. So now even Mickey is doing open-source stuff, to everybody's delight!!

To sum up, Ptex addresses the usual texture-assignment issues by eliminating UV assignment, providing seamless filtering, and allowing any number of textures to be stored in a single file. See http://www.disneyanimation.com/library/ptex/ptex-teaser-big.png for details ;)
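
To make the per-face idea concrete, here is a toy Python sketch (my own illustration, not the real Ptex C++ API): every face of the mesh owns its own small texel grid with its own resolution, so no global UV layout is needed and detail goes only where you want it.

```python
# Toy per-face texturing (conceptual only, not the Ptex API): each quad face
# stores its own texel grid, addressed by a local (u, v) inside that face.
import numpy as np

class PerFaceTexture:
    def __init__(self):
        self.faces = {}  # face id -> texel array of shape (res_v, res_u, 3)

    def add_face(self, face_id, res_u, res_v):
        # Each face can have its own resolution: more texels where detail matters.
        self.faces[face_id] = np.zeros((res_v, res_u, 3), dtype=np.float32)

    def write(self, face_id, u, v, color):
        tex = self.faces[face_id]
        res_v, res_u, _ = tex.shape
        tex[int(v * (res_v - 1)), int(u * (res_u - 1))] = color

    def lookup(self, face_id, u, v):
        # Nearest-neighbour lookup; the real Ptex does seamless filtering across faces.
        tex = self.faces[face_id]
        res_v, res_u, _ = tex.shape
        return tex[int(v * (res_v - 1)), int(u * (res_u - 1))]

ptx = PerFaceTexture()
ptx.add_face(0, 64, 64)   # a face that deserves lots of detail
ptx.add_face(1, 4, 4)     # a face that barely needs any
ptx.write(0, 0.5, 0.5, (1.0, 0.0, 0.0))
print(ptx.lookup(0, 0.5, 0.5))
```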

I am definitely in love with this approach because it lets you put detailed textures exactly where you want them!

Link : Source

SIGGRAPH 2009 NVIDIA Presentations

If you want an overview of what NVIDIA wants to show developers, you should take a look at these presentations. They are more marketing slides than anything else, since they do not compare against other approaches... but that is because the slides are corporate ones and they do not want to mention ATI stuff...

1. Languages, APIs and Development Tools for GPU Computing  
2. Programming for the CUDA Architecture  
3. Programming in OpenCL  
4. CUDA in the VFX pipeline  
5. Development Tools  
6. The Art of Performance Optimization  
7. Directions in GPU Computing  

You can also see other presentations:


1. Advances in GPU-Based Image Processing,

   with special interest in corner feature extraction, SIFT-like descriptors and matching, and panoramic image matching and blending. They do not give details about the blending method (how many Laplacian pyramid levels do they use? We know that a CPU implementation is more efficient for small image sizes, so is the final gain really impressive, or do they only use a two-band blend of high and low frequencies? See the sketch after this list.)

2. 3D Vision Technology - Develop, Design, Play in 3D Stereo
3. Creating Immersive Environments With NVIDIA APEX
4. Alternative Rendering Pipelines on NVIDIA CUDA
5. Efficient Ray Tracing on NVIDIA GPUs (OptiX)
6. Accelerating Realism With the NVIDIA Scene Graph (SceniX)
7. Multi-Layer, Dual-Resolution Screen-Space Ambient Occlusion
8. Real-time Rendering of Efficient Substitutes for Subdivision Surfaces
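
About the blending question raised under presentation 1: here is a minimal sketch of the two-band variant (low frequencies blended with a feathered seam, high frequencies picked per pixel from one image). It is my own toy formulation with NumPy and SciPy, not NVIDIA's implementation.

```python
# Toy two-band blending: smooth blend of the low-frequency layers, hard pick of
# the high-frequency (detail) layers to avoid ghosting along the seam.
import numpy as np
from scipy.ndimage import gaussian_filter

def two_band_blend(img_a, img_b, mask, sigma=20.0):
    """img_a, img_b: float arrays (H, W); mask: 1 where img_a should win."""
    low_a = gaussian_filter(img_a, sigma)
    low_b = gaussian_filter(img_b, sigma)
    high_a = img_a - low_a                       # detail layers
    high_b = img_b - low_b

    soft_mask = gaussian_filter(mask.astype(np.float64), sigma)  # feathered seam
    low = soft_mask * low_a + (1.0 - soft_mask) * low_b          # smooth blend
    high = np.where(mask > 0.5, high_a, high_b)                  # hard pick of detail
    return low + high

# Tiny usage example on synthetic images.
h, w = 128, 256
img_a = np.tile(np.linspace(0.2, 0.8, w), (h, 1))
img_b = np.tile(np.linspace(0.9, 0.1, w), (h, 1))
mask = np.zeros((h, w)); mask[:, : w // 2] = 1.0   # left half comes from img_a
result = two_band_blend(img_a, img_b, mask)
print(result.shape, float(result.min()), float(result.max()))
```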

Source : link

Thursday, January 21, 2010

Some news about researchers and employment

I'm glad to see that computer vision is considered an important field in companies like Microsoft and Google.

Do you know Georg Klein, the author of PTAM?

This guy now works for Microsoft.

See the note on his main page HERE:

"Oct 2009: Microsoft
As of October 2009, I've moved to Seattle to work for Microsoft's MSN Advanced Engineering team. I will keep maintaining the PTAM sources, but updates to this page will slow and cease."

Another case :

Mr. Yasutaka Furukawa, the well-known author of PMVS, now works for Google.

The note :

"I work for Google now, while I am still involved in some academic projects. I hope to provide exciting products to people in the world and prove that computer vision is exciting and useful."

Monday, January 18, 2010

Natal tech details

So what is the core of Natal?

- Nothing more than fitting a 3D articulated body to a point cloud?

   From my point of view I would say yes. On the following screen we can see the 3D point cloud captured by the system, and notice that the camera only gives good 3D points within a given distance (look at the sofa: it does not appear on the back plane of the point cloud. This is normal, though; it is just an observation about the hardware.)
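
As a small illustration of that depth-range limitation, here is a sketch (the range values are my assumptions, not Natal specs) that keeps only the depth samples inside a usable interval:

```python
# Keep only depth samples inside the sensor's reliable range; anything farther
# away (like the sofa) simply drops out of the point cloud.
import numpy as np

depth = np.random.uniform(0.3, 6.0, size=(480, 640))  # hypothetical depth map in metres
NEAR, FAR = 0.8, 4.0                                   # assumed usable range

valid = (depth > NEAR) & (depth < FAR)
print("usable pixels:", int(valid.sum()), "of", depth.size)
```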

  For sure they use something to check whether the detected articulation movement is possible for a human body: a probabilistic engine that checks which detected parts are the most plausible.


Other screens:


Source : link and quoted text from the link

"Step 1: As you stand in front of the camera, it judges the distance to different points on your body. In the image on the far left, the dots show what it sees, a so-called "point cloud" representing a 3-D surface; a skeleton drawn there is simply a rudimentary guess. (The image on the top shows the image perceived by the color camera, which can be used like a webcam.) 

Step 2: Then the brain guesses which parts of your body are which. It does this based on all of its experience with body poses—the experience described above. Depending on how similar your pose is to things it's seen before, Natal can be more or less confident of its guesses. In the color-coded person above [bottom center], the darkness, lightness, and size of different squares represent how certain Natal is that it knows what body-part that area belongs to. (For example, the three large red squares indicate that it’s highly probable that those parts are “left shoulder,” “left elbow” and “left knee"; as the pixels become smaller and muddier in color, such as the grayish pixels around the hands, that’s an indication that Natal is hedging its bets and isn’t very sure of its identity.)

Step 3: Then, based on the probabilities assigned to different areas, Natal comes up with all possible skeletons that could fit with those body parts. (This step isn't shown in the image above, but it looks similar to the stick-figure drawn on the left, except there are dozens of possible skeletons overlaid on each other.) It ultimately settles on the most probable one. Its reasoning here is partly based on its experience, and partly on more formal kinematics models that programmers added in.

Step 4: Once Natal has determined it has enough certainty about enough body parts to pick the most probable skeletal structure, it outputs that shape to a simplified 3D avatar [image at right]. That’s the final skeleton that will be skinned with clothes, hair, and other features and shown in the game.

Step 5: Then it does this all over again—30 times a second! As you move, the brain generates all possible skeletal structures at each frame, eventually deciding on, and outputting, the one that is most probable. This thought process takes just a few milliseconds, so there's plenty of time for the Xbox to take the info and use it to control the game."
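
To make steps 2 and 3 more concrete, here is a hedged toy sketch (my own formulation, not Microsoft's algorithm): each body part has a few candidate positions with a confidence from the labelling step, every combination forms a candidate skeleton, and the skeleton with the best confidence-times-kinematic-plausibility score wins.

```python
# Toy version of "generate all plausible skeletons, keep the most probable one".
import itertools
import math

# Hypothetical per-part classifier output: candidate 3D positions with confidences.
candidates = {
    "left_shoulder": [((0.30, 1.40, 2.0), 0.9), ((0.10, 1.10, 2.1), 0.2)],
    "left_elbow":    [((0.35, 1.15, 2.0), 0.8), ((0.60, 1.40, 2.0), 0.3)],
    "left_hand":     [((0.40, 0.90, 2.0), 0.4), ((0.20, 1.30, 2.1), 0.3)],
}

# Assumed rest bone lengths (metres) used as a crude kinematic prior.
bone_lengths = {("left_shoulder", "left_elbow"): 0.28,
                ("left_elbow", "left_hand"): 0.26}

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kinematic_plausibility(skeleton):
    # Penalise bones whose length strays from the expected value.
    score = 1.0
    for (ja, jb), expected in bone_lengths.items():
        score *= math.exp(-abs(dist(skeleton[ja], skeleton[jb]) - expected) / 0.05)
    return score

parts = list(candidates)
best_skeleton, best_score = None, -1.0
for combo in itertools.product(*(candidates[p] for p in parts)):
    skeleton = {p: pos for p, (pos, _) in zip(parts, combo)}
    confidence = math.prod(prob for _, prob in combo)   # how sure the labelling is
    score = confidence * kinematic_plausibility(skeleton)
    if score > best_score:
        best_skeleton, best_score = skeleton, score

print("most probable skeleton:", best_skeleton)
```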

Friday, January 15, 2010

A 26-gigapixel panorama, cocorico!

Kolor, the maker of Autopano, has released some information about the 26-gigapixel image they captured.

The stitching of the 2,346 individual photos (17 rows of 138 photos) was done in November and resulted in a giant image of 26.7 gigapixels: that is about 27 billion pixels!
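
A quick back-of-the-envelope check of those numbers:

```python
# Rough estimate only: how much each source photo contributes to the mosaic.
rows, cols = 17, 138
total_pixels = 26.7e9

photos = rows * cols                      # 17 * 138 = 2346 source photos
pixels_per_photo = total_pixels / photos  # contribution per photo in the final image

print(photos, "photos")
print(round(pixels_per_photo / 1e6, 1), "Mpix of final image per photo")
# ~11.4 Mpix each; since stitching needs overlap between neighbours, the camera
# used most likely has a noticeably higher native resolution than that.
```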

Normally it will be possible to see it soon. Notice the partners on the right of the website (Intel Xeon / Intel Server... so the solution and its computation seem to be quite resource-hungry...)


Source : link

Tuesday, January 12, 2010

Immersive FPS

Some people have run tests with a pico projector mounted on a plastic weapon fitted with a motion sensor. It gives a cool way to play an FPS... but you need a dark room and a white wall, and you have to put up with the large image distortion of the pico projector.

Source : link

The pico Projector used : link

Friday, January 8, 2010

The tech secret of Microsoft Natal

A new article that you can read on TechRadar gives details about how Natal is able to recognize the player's pose. It seems that a 50 MB database of postures has been built from motion-capture data. The input image is then matched against this database and the most probable posture is kept; a temporal tracking step seems to be applied afterwards to keep time consistency.

For Microsoft, a body seems to be cut into 31 recognizable parts, so the technology looks like an upgraded form of silhouette recognition and tracking.

The announced performance: Natal recognizes up to 31 different body parts at up to 30 fps. It recognizes any pose in 10 milliseconds, and it only takes 160 milliseconds to detect a new user who steps in front of the camera.

Some websites report that using the Natal tech adds a lag of 100 ms compared to a classic controller.
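
Putting those announced numbers side by side (simple arithmetic, my own framing):

```python
# Frame budget at the announced rates, and how the reported lag compares.
fps = 30
frame_budget_ms = 1000 / fps      # ~33.3 ms available per frame
pose_recognition_ms = 10          # announced pose recognition time
new_user_ms = 160                 # announced time to detect a new user
reported_lag_ms = 100             # lag reported by some websites

print("frame budget:", round(frame_budget_ms, 1), "ms")
print("left for the game after pose recognition:",
      round(frame_budget_ms - pose_recognition_ms, 1), "ms")
print("reported lag is about", round(reported_lag_ms / frame_budget_ms, 1), "frames")
print("new-user detection spans about", round(new_user_ms / frame_budget_ms, 1), "frames")
```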

Source : link

Source : link


Tuesday, January 5, 2010

Natal and game integration

This video shows an integration of the Microsoft Natal technology to control Gordon, the HL2 player character.

I think the tech is really cool, but you will get seasickness and arm cramps within minutes...

It is interesting to see that the tech is effective, but you can also see that managing both arms to trigger the 'use' function on game elements does not work on the first try...

Source : link

This other video shows many game interactions. We see full-body tracking. It is interesting to see the Microsoft guy moving around the player while the games do not seem to produce erroneous movements (so the tech seems robust).

Notice that we never see a person pass in front of the player! So we do not know how robust the system is to temporary occlusion.