Ontology in a virtual world
Project Overview
Summary of Project
In the past ten years, artificial intelligence researchers have been developing the semantic web by augmenting web pages with structured information so machines can “reason” about web pages. This project aims to do the same thing with the real world and 3D virtual worlds. In both the real world and in 3D virtual worlds, a human can recognize a door or a castle, but a machine may not have explicit labels and so cannot associate additional semantic information with “things.” 3D virtual worlds like Second Life provide a good starting point for this research since, in such worlds, every object has an explicit identity, a location, and an owner, and objects can have a user-supplied label indicating the object’s type. The objective of this project is to determine how to build an ontology (knowledge representation) for data collected by a 3D virtual world search engine (developed by UA PhD student Josh Eno in the CSCE Department). Our approach is to analyze the data with various techniques (neural networks, decision trees, self-organizing maps). Using these pattern-matching techniques, a locational nearness pattern can identify which objects are often found near other objects. We do not have complete labeling, but we can use the objects and locations that are tagged with names to train classifiers to tag things that do not have names. Such an ontology would enable queries like
Find X-ray machines within 50 meters of wheelchairs.
Find all regions of land containing health care objects.
Find regions similar to the “University of Arkansas” island region.
We may be the first research team to try to develop an ontology service for annotating virtual objects with labels that have associated properties. If we succeed, we will also aim to extend this work to labeling real-world objects and then associating relevant information with them. In the future, a person will be able to point a cell phone at an RFID-tagged object and download associated information about that object from the ontology service.
Background
In my sophomore year, I learned about 3D virtual worlds (like Second Life) in a course on Artificial Intelligence offered by Dr. Craig Thompson in the Computer Science and Computer Engineering Department. His Everything is Alive project at the University of Arkansas initially focused on RFID middleware, but we realized we could use 3D virtual worlds to model pervasive computing and the future Internet of Things. Recent papers have begun to generalize the Internet of Things to explore frameworks for smart objects that identify some of the attributes that make an object smart. At the same time, the 3D virtual world community has developed Second Life, Open Wonderland, Open Cobalt, and others, which anyone worldwide can easily learn how to use. Our work differs in that we use 3D virtual world technology to identify, construct, and demonstrate smart world protocols in an understandable manner. Potential advantages of using 3D virtual worlds to understand a future smart semantic world are that modeling is low cost compared to developing and deploying real-world technologies, and that modular services we develop for interoperating with virtual worlds may transfer more or less directly to the real world. In class, we discussed how to use virtual worlds to model and understand a future real world where computing is ubiquitous: where every object has identity, can communicate, and is, to some degree, a “smart” object. My class project expanded these ideas and resulted in a demonstration and two papers.
Problem
In the real world, a chair does not know it is a chair. Similarly, in virtual worlds, 3D objects may contain graphical models but may be missing functional descriptions so again a virtual chair may not know it is a chair. While it is easy for humans to rapidly identify things in a scene, a computer can be challenged to parse a scene into separate objects of varying types. Can we find ways to explicitly label objects and then associate additional metadata?
Objective
This honors research grant proposes to extend our work on individual smart objects to provide a way to label and organize objects in a virtual world. Artificial intelligence uses the term “ontology” to refer to a computational knowledge representation that enables a computer to identify and reason about a collection of related things. Therefore, similar to the way researchers are developing the semantic web by adding metadata to web objects, we propose adding ontology metadata to objects in a 3D virtual world to make them smart(er) objects. Such ontologies would enable queries like the following (a rough sketch of the first query appears after the list):
Find X-ray machines within 50 meters of wheelchairs.
Find all regions of land containing health care objects.
Find regions similar to the “University of Arkansas” island region.
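To make the first query concrete, here is a minimal sketch of how such a spatial query could be answered once objects carry labels. The object list, coordinates, and helper functions below are hypothetical placeholders, not our actual search engine schema or crawl data.

    # Hypothetical sketch: "Find X-ray machines within 50 meters of wheelchairs."
    # Objects are assumed to be (label, x, y, z) tuples; all values are made up.
    import math

    objects = [
        ("x-ray machine", 10.0, 20.0, 22.0),
        ("wheelchair", 15.0, 48.0, 22.0),
        ("chair", 200.0, 5.0, 21.0),
    ]

    def distance(a, b):
        # Euclidean distance between two labeled objects' coordinates
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a[1:], b[1:])))

    def find_near(label_a, label_b, radius):
        # Return objects with label_a within radius meters of any label_b object
        targets = [o for o in objects if o[0] == label_b]
        return [o for o in objects
                if o[0] == label_a and any(distance(o, t) <= radius for t in targets)]

    print(find_near("x-ray machine", "wheelchair", 50.0))

A real ontology service would of course answer this against an indexed store rather than a linear scan, but the query shape is the same.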
Approach
The first step in building an ontology is to analyze the data we have and determine what kind of knowledge we can obtain from it. This requires techniques from data mining. Therefore, I will start by becoming familiar with classifiers, including k-nearest neighbor, neural networks, decision trees, cosine similarity, and support vector machines. Then, I will apply appropriate text classification techniques to the data to find patterns and mine useful information. For example, our collected data includes information on objects in virtual worlds such as xyz coordinate location, name, and owner. Using the techniques mentioned above, a locational nearness pattern can identify which objects are often found near other objects. Even if we do not have a complete set of information, we can use the objects and locations that are tagged with names to train the classifiers to tag things that do not have names, as sketched below. Identifying such objects can increase the probability of finding related objects, which narrows the search space when we are trying to identify nearby but unlabeled objects. This information would be organized into a semantic network, also known as an ontology graph.
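As one hedged illustration of this idea, the sketch below uses scikit-learn’s k-nearest-neighbor classifier to suggest a label for an unnamed object from its coordinates alone, under the assumption that nearby objects tend to share types. The coordinates and labels are invented for the example, not taken from our crawl.

    # Illustrative k-NN sketch: label an unnamed object from nearby named ones.
    from sklearn.neighbors import KNeighborsClassifier

    # (x, y, z) coordinates of objects whose names we do know (made-up data)
    labeled_coords = [[10, 20, 22], [12, 21, 22], [200, 5, 21], [198, 7, 21]]
    labels = ["desk", "desk", "tree", "tree"]

    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit(labeled_coords, labels)

    # Predict a label for an unnamed object from its location alone
    print(clf.predict([[11, 19, 22]]))  # likely "desk"

In practice the feature vector would include more than raw coordinates (e.g., owner, region, and the labels of surrounding objects), but the training-and-tagging loop is the same.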
Once this analysis is done, the next step is to visualize and utilize the obtained information. Visualization helps people understand the data more easily. We may also be able to use the information in the real world. For example, it would make it possible to predict what kind of facility we are in just by looking at some of the objects around us (e.g., an office, a bathroom, a classroom, a hospital room, a restaurant). It would also help people find objects of interest if we know those objects are often found in certain places.
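As a toy illustration of the facility-prediction idea, the following sketch guesses a room type from the objects seen in it by counting overlaps against known room/object associations. The associations here are hand-written assumptions, not mined data.

    # Toy sketch: guess the kind of room from the objects observed in it.
    from collections import Counter

    # Assumed (not mined) associations between room types and typical objects
    room_objects = {
        "office": ["desk", "chair", "computer", "printer"],
        "hospital room": ["bed", "wheelchair", "x-ray machine"],
        "restaurant": ["table", "chair", "menu"],
    }

    def guess_room(seen):
        # Score each room type by how many of its typical objects were seen
        scores = Counter()
        for room, objs in room_objects.items():
            scores[room] = len(set(seen) & set(objs))
        return scores.most_common(1)[0][0]

    print(guess_room(["bed", "wheelchair"]))  # "hospital room"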
Potential Impact
In future work, I am interested in further understanding how to use what we are learning in modeling virtual worlds to model the real world with the addition of pervasive computing. One idea is to add an RFID reader to a cell phone and then use the cell phone as a real-world search bot to gather information about any RFID-tagged object the person passes by; just as in Second Life, we could then keep track of inventories of objects we pass by, along with their identities, types, and locations. If every cell phone user used this facility, it could result in a continuously updated inventory of the real world. Further work involving cell phones, GPS locations of pictures, and image processing technologies like image stitching could lead toward rapid, continuous modeling of the world so that our 3D models are kept up to date as the real world changes.
Updates
Building a Self-Organizing Map
2010/06/29
A self-organizing map (SOM) is a type of neural network model that uses unsupervised learning. It is also called a Kohonen feature map, after its inventor, Teuvo Kohonen.
A SOM compresses multi-dimensional data into a low-dimensional map (usually 2D), which makes complex data easier to visualize. A SOM is composed of two layers: the first layer receives the input, and the data flow to the second layer. In the second layer, competitive learning on the similarity between the input and the connection weights is repeated for a specified number of iterations. As a result, neighboring nodes in the map tend to have similar weights.
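Below is a minimal sketch of that competitive-learning loop written in NumPy. The 10x10 grid, exponential learning-rate decay, and Gaussian neighborhood are common illustrative choices and are assumptions here, not the exact settings used in my experiment.

    # Minimal SOM sketch, assuming 3D inputs (x/y/z coordinates) mapped to a 2D grid.
    import numpy as np

    def train_som(data, grid=(10, 10), iterations=200, lr0=0.5, radius0=5.0):
        rng = np.random.default_rng(0)
        weights = rng.random((grid[0], grid[1], data.shape[1]))
        coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                                      indexing="ij"), axis=-1)
        for t in range(iterations):
            lr = lr0 * np.exp(-t / iterations)          # decaying learning rate
            radius = radius0 * np.exp(-t / iterations)  # shrinking neighborhood
            x = data[rng.integers(len(data))]           # pick a random input
            # best-matching unit: the node whose weight is closest to the input
            bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), grid)
            # pull the BMU and its grid neighbors toward the input
            d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
            h = np.exp(-d2 / (2 * radius ** 2))
            weights += lr * h[..., None] * (x - weights)
        return weights

    # usage (random stand-in for crawled coordinates):
    # mapped = train_som(np.random.rand(100, 3))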
For testing purposes, I used this technique to visualize a simple dataset from Second Life, which includes each object’s name and x/y/z coordinates. I eliminated the objects named “Objects” because that is the default name of a prim and does not mean anything. The resulting map should group objects that are located close together.
This shows how the SOM learned the similarity over the iterations. It appears to reach a good level of similarity by about 200 iterations.
This is the map I got this time. This simple example overlays object information from a three-dimensional world onto a two-dimensional map, and the technique can be applied to more complex data. I used absolute coordinates to organize this map, but if I use relative coordinates between objects, the map will instead show which types of objects tend to be located close to one another. In that case, I would have to reorganize the database first because, in the world, the same object can be named in different ways. My plan is to retrieve the definition of each object name from the Internet and use LSA (latent semantic analysis) to categorize those words more generally (e.g., to tell that a hat and a cap are the same thing). Then, I will calculate the spatial closeness of objects around each object.
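A hedged sketch of that planned LSA step follows, assuming the definitions have already been fetched. The names and one-line definitions below are hand-written placeholders rather than retrieved text, and scikit-learn’s TF-IDF plus truncated SVD stands in for whatever LSA pipeline I end up using.

    # LSA sketch: compare object names by the similarity of their definitions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    names = ["hat", "cap", "chair"]
    definitions = [
        "a covering worn on the head",
        "a soft covering for the head with a visor",
        "a seat with a back for one person",
    ]

    # Bag-of-words matrix reduced to a low-dimensional "concept" space
    tfidf = TfidfVectorizer().fit_transform(definitions)
    vectors = TruncatedSVD(n_components=2).fit_transform(tfidf)

    # hat/cap should score higher than hat/chair
    print(cosine_similarity(vectors))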