Tuesday, August 1, 2006

Visual Search - using a camera and Google?

Here is a quote from the "Cool Hunter":
 
"Designed by a young UK designer, Pei Kang Ng, Google3D is meant as a viable business proposal for Google, five to ten years from now. With Google3D, the idea is literally, to bring the conveniences of the search engine to your fingertips.  Now, you can find out things on the move - wherever and whenever you want to - just by taking a picture. You don't even have to type!"
 
This is Pei Kang Ng's site.
 
In my opinion, this device will first appear in the notoriously popular iPod, equipped with a camera and a wireless link.
 
Note that this concept of visual search is not the same approach as our MVM Visual Search.  Our notion is that you build a model (in 3D or 2D) of what you are looking for to drive your search.  The model is then translated, using its attributes and specific domain knowledge, into a search that can find even products that don't have a photo.  The trouble with a photo is that the software has no idea what is important to the user, unless you are looking for an exact match.  The other difference is that our approach is feasible today, not five to ten years from now!
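
To make the idea more concrete, here is a minimal sketch in Python of how a structured garment model, combined with a little domain knowledge, could be translated into a text search query so that even products without a photo can be found. The attribute names, synonym rules and query format are purely illustrative assumptions, not MVM's actual schema:

    # Illustrative sketch only: translate a structured garment model into a search query.
    # Attribute names, synonym rules and the query format are hypothetical, not MVM's schema.

    garment_model = {
        "category": "dress",
        "sleeve": "cap",
        "neckline": "v-neck",
        "length": "knee",
        "fabric": "silk",
        "colour": "emerald",
    }

    # Domain knowledge: map model attributes to the vocabulary retailers actually use.
    DOMAIN_SYNONYMS = {
        "cap": ["cap sleeve", "short sleeve"],
        "knee": ["knee length", "midi"],
        "emerald": ["emerald", "green"],
    }

    def model_to_query(model):
        """Expand each attribute with its domain synonyms and build a search string."""
        terms = [model["category"]]
        for value in (model[k] for k in ("sleeve", "neckline", "length", "fabric", "colour")):
            expanded = DOMAIN_SYNONYMS.get(value, [value])
            terms.append(" OR ".join('"%s"' % t for t in expanded))
        return " ".join("(%s)" % t if " OR " in t else t for t in terms)

    print(model_to_query(garment_model))
    # dress ("cap sleeve" OR "short sleeve") "v-neck" ("knee length" OR "midi") "silk" ("emerald" OR "green")

The point is that because the model is structured, the translation step can be tuned per domain by experts, rather than guessed from pixels.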
 

2 comments:

  1. Very interesting. It's not clear to me though what the underlying concept or approach is for MVM Visual Search. Are you suggesting that consumers will build or create their own 3d models (a la SketchUp or FreeDimension) and then attach some form of metadata that will drive search? Isn't this what Google is doing with 3D Warehouse?

  2. Actually, 3D Warehouse is not at all what MVM Visual Search is about. MVM Visual Search is a totally different, patent-pending approach to configuring a model (of a garment or a product) using a library of domain-specific parts. It is not a free-form technique like the drawing tool for Flickr (http://labs.systemone.at/retrievr/); rather, MVM takes advantage of the vast (six years') knowledge of creating 3D models of garments/products and democratizes it by making it accessible to users as a point-and-click UI. Because the model is "structured", it is then easy for domain experts to automatically generate optimised search strings from the model attributes. MVM also has millions of 3D models of "real" products already created. MVM is creating a "standard" to describe features of models in specific domains such as fashion and household products. Humans are better with visuals and computers are better with semantics! MVM bridges the two by allowing users to configure a visual model and then letting the servers talk to each other in a highly structured manner (a small sketch of such an exchange appears below). Perhaps an opening for making the dream of the Semantic Web come true?

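
As a purely hypothetical illustration of that last point (the element names below are made up, not an MVM format), a configured model could be serialised into a simple structured document that servers in different domains can exchange and interpret:

    # Illustrative only: wrap a configured model's attributes in a simple XML document
    # so that servers can exchange it in a structured, machine-readable form.
    import xml.etree.ElementTree as ET

    def model_to_xml(domain, attributes):
        """Serialise a model's attributes for server-to-server exchange."""
        root = ET.Element("model", {"domain": domain})
        for name, value in attributes.items():
            ET.SubElement(root, "attribute", {"name": name}).text = value
        return ET.tostring(root, encoding="unicode")

    print(model_to_xml("fashion", {"category": "dress", "sleeve": "cap", "fabric": "silk"}))
    # <model domain="fashion"><attribute name="category">dress</attribute>...</model>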