The Semantic Web promises a purity and interconnectedness of data the likes of which we have never seen. However, people are selfish. On top of that, people are lazy and fallible. All of these human qualities form a roadblock that we must get past if we hope to create the Semantic Web. We need to both satisfy our selfishness and overcome our error-prone nature if we are to reach our goal.
Published 9 years ago by Yihong Ding
There is a contradiction. The dream of the Semantic Web is beautiful, but few people are willing to take the initiative to realize it. The reason lies primarily in the pitiful selfishness of mankind: we prefer to enjoy the contributions of others rather than contribute ourselves in the first place. Some pessimists among us, such as Stephen Downes and Mor Naaman, have even pronounced the Semantic Web dead for this reason. The rest of us, however, cannot help but keep trying to resolve this contradiction, actively and optimistically.
There's a lot of talk about new search engines and the promising technologies behind them. One technology that has recently been applied to Web search is natural language processing. NLP allows search engines such as Hakia and Powerset to return results based on the query's meaning rather than relying on keyword distribution to identify relevant Web documents.
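To see why meaning-based matching differs from keyword matching, here is a toy sketch in Python. It is not Hakia's or Powerset's actual technology; the tiny synonym table stands in for real semantic analysis, and the documents and query are invented for illustration.

```python
# Two documents that express the same ideas with different vocabulary.
docs = {
    "d1": "physicians prescribe medication for hypertension",
    "d2": "doctors treat high blood pressure with drugs",
}

# Hypothetical synonym table standing in for real NLP/semantic analysis.
SYNONYMS = {
    "doctor": {"doctor", "physician"},
    "medication": {"medication", "drug"},
}

def keyword_match(query, text):
    """Pure keyword matching: every query term must appear verbatim."""
    return all(term in text.split() for term in query.split())

def semantic_match(query, text):
    """Crude 'semantic' matching: accept any synonym, ignoring plural 's'."""
    stems = {w.rstrip("s") for w in text.split()}
    for term in query.split():
        candidates = SYNONYMS.get(term, {term})
        if not candidates & stems:
            return False
    return True

query = "doctor medication"
print([d for d, t in docs.items() if keyword_match(query, t)])   # []
print([d for d, t in docs.items() if semantic_match(query, t)])  # ['d1', 'd2']
```

The keyword matcher finds nothing because neither document contains the literal words "doctor" and "medication"; even this crude synonym expansion recovers both, which is the intuition behind meaning-based search.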
The discussion of semantic search has gradually become popular. Not long ago, semantic search was thought to be little more than a dream. At present, optimistic researchers have begun to believe it is possible in the near future. Very recently at Read/WriteWeb, Dr. Riza C. Berkan, the CEO of Hakia (a company that claims to perform "semantic search"), posted an article about semantic search that attracted much attention. While agreeing with that post, here are some further thoughts about semantic search.
Right now more content is being created than can be consumed. You might say, "but all content gets consumed eventually, by someone." This is generally true and I completely agree. But how much of that information do you yourself consume? I will assert that it is a very small slice of the pie. Even if you focus on a single topic, there are simply too many publications. Try searching "Semantic Web" on Technorati or Bloglines to see what I mean. It's a never-ending flow of information. At the Web's current rate of expansion it will become harder and harder to keep up with it all.
Published 9 years ago by James Simmons
It seems as though nothing short of a new buzzword can stop the burst of activity in the vertical search market, and who are we to complain? Vertical search engines differ from their horizontal brethren (which attempt to index the Web as a whole) by focusing on a single topic or niche from which to index information. Often, a VSE can deliver results with far greater relevance and accuracy than major horizontal players like Google, Yahoo, and Microsoft.
Published 10 years ago by James Simmons
I have three interesting links you need to check out. The first two are products for discovering and storing metadata, natural language processing, and more. The third is a post on the Geospatial Semantic Web Blog that gives us an update on Metalink's ability to map its descriptions into RDF.
For just about every area of research, there exist documents online describing background information or techniques for accomplishing tasks in that domain. These documents are often referred to as white papers, provided their content is technical or research-oriented. The information held within white papers is essentially accessible only to humans, because machines cannot read and comprehend text the way humans can. If machines could read white papers and extract information the way humans do, we would be able to store each fact and piece of knowledge from the documents. This method of indexing would facilitate much more detailed searches, allowing users to search by topic, theory, conclusion, methods, citations, references, and so on.
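To make the idea concrete, here is a minimal sketch of how such extracted facts might be stored as subject-predicate-object triples (the model RDF uses) and then queried by field. The schema, predicate names, and papers are all hypothetical, and the extraction step itself is assumed to have already happened.

```python
# Triple store: a flat list of (subject, predicate, object) facts.
triples = []

def add_paper(title, topic, method, conclusion, citations):
    """Record a white paper's extracted facts as triples."""
    triples.extend([
        (title, "hasTopic", topic),
        (title, "usesMethod", method),
        (title, "concludes", conclusion),
    ])
    for cited in citations:
        triples.append((title, "cites", cited))

def query(predicate, obj):
    """Find all subjects with a given predicate/object pair."""
    return [s for s, p, o in triples if p == predicate and o == obj]

# Hypothetical papers standing in for machine-read white papers.
add_paper("Paper A", "semantic search", "latent semantic analysis",
          "meaning beats keywords", ["Paper B"])
add_paper("Paper B", "semantic search", "keyword indexing",
          "keywords scale well", [])

print(query("hasTopic", "semantic search"))  # ['Paper A', 'Paper B']
print(query("cites", "Paper B"))             # ['Paper A']
```

Because every fact is a uniform triple, searching by topic, method, conclusion, or citation is the same one-line query, which is exactly the kind of detailed search the paragraph above envisions.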
Published 10 years ago by James Simmons
I was reading a blog entry by Matt at PeerPressure that brings up a point worth sharing. One of the biggest problems supporters of the Semantic Web initially faced was, as Matt put it, the classic tech catch-22. His explanation:
It isn't difficult to imagine that 10 or even 30 years from now, the Web will be a dramatically different place. Looking at how quickly we've progressed in the last decade, you can see that technology has a way of developing quite rapidly. It has been my observation that Web technology, specifically in the area of Web standards, has always moved more slowly than other areas of technology. This is due to the immaturity of the medium; the World Wide Web can still be considered in its infancy. Another factor contributing to slow progress has been the difficulty of getting browser vendors to cooperate with one another and follow standards properly.