Month: March 2009
Here is Tim Berners-Lee explaining the vision of the Semantic Web… let's hear him and discuss what you understood.
This guy explains the concepts of the Semantic Web in a very easy way… try to get the vision from this video, then follow the discussion…
The Resource Description Framework (RDF), developed under the sponsorship of the World Wide Web Consortium (W3C), is an infrastructure that enables the encoding, exchange, and reuse of structured metadata.
This infrastructure enables metadata interoperability through the design of mechanisms that support common conventions of semantics, syntax, and structure. RDF does not specify semantics for each resource description community, but rather provides the ability for these communities to define metadata elements as needed. RDF uses XML (eXtensible Markup Language) as a common syntax for the exchange and processing of metadata. The XML syntax is a subset of the international text processing standard SGML (Standard Generalized Markup Language) specifically intended for use on the Web.
A URI is simply a Web identifier, like the strings starting with http or ftp that you often see on the World Wide Web. Anyone can create a URI, and the ownership of URIs is clearly delegated, so they form an ideal base technology on top of which to build a global Web. In fact, the World Wide Web is such a thing: anything that has a URI is considered to be “on the Web.” Every data object and every data schema/model in the Semantic Web must have a unique URI.
A Uniform Resource Locator (URL) is a URI that, in addition to identifying a resource, provides a means of acting upon or obtaining a representation of that resource by describing its primary access mechanism (for example, its network location).
The Semantic Web is generally built on syntaxes which use URIs to represent data, usually in triple-based structures: i.e. many triples of URI data that can be held in databases, or interchanged on the World Wide Web using a set of particular syntaxes developed especially for the task. These syntaxes are called "Resource Description Framework" (RDF) syntaxes.
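The triple model described above can be sketched in a few lines of plain Python, with no RDF library: each statement is a (subject, predicate, object) triple, and each term would be a URI in a real store. The `example.org` URIs below are hypothetical, chosen only for illustration.

```python
# Toy triple store: every statement is a (subject, predicate, object) tuple.
# In real RDF each term is a URI (objects may also be literal values).
triples = [
    ("http://example.org/people#alice", "http://xmlns.com/foaf/0.1/name", "Alice"),
    ("http://example.org/people#alice", "http://xmlns.com/foaf/0.1/knows",
     "http://example.org/people#bob"),
    ("http://example.org/people#bob", "http://xmlns.com/foaf/0.1/name", "Bob"),
]

def match(s=None, p=None, o=None):
    """Return every triple matching the given terms (None acts as a wildcard)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Query the store: who does Alice know?
known = match(s="http://example.org/people#alice",
              p="http://xmlns.com/foaf/0.1/knows")
```

Because the data is just uniform triples of identifiers, the same `match` query works over any dataset held in this form — which is exactly why triple-based structures are easy to interchange and merge.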
Based on this metadata, intelligent applications such as semantic portals can be created. Metadata creation includes two major parts. First, the ontologies and vocabularies used as the basis of the metadata descriptions are defined. Second, the web resources are annotated with metadata conforming to those definitions.
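The two-step process above can be sketched as a toy in plain Python: first a small vocabulary is defined, then a web resource is annotated and the annotation is checked for conformance. The `dc:` property names echo Dublin Core, but the vocabulary, function, and URI here are all made up for illustration.

```python
# Step 1: define the vocabulary — the metadata elements annotations may use.
vocabulary = {"dc:title", "dc:creator", "dc:date"}

def annotate(resource_uri, metadata):
    """Step 2: attach metadata to a resource, rejecting undefined elements."""
    unknown = set(metadata) - vocabulary
    if unknown:
        raise ValueError(f"properties not in vocabulary: {sorted(unknown)}")
    # Conforming metadata becomes triples about the resource.
    return [(resource_uri, prop, value) for prop, value in metadata.items()]

annotation = annotate("http://example.org/page.html",
                      {"dc:title": "Weather Forecast", "dc:creator": "Alice"})
```

A real system would use an ontology language such as RDF Schema or OWL instead of a Python set, but the division of labour is the same: definitions first, conforming annotations second.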
Short Intro: In this blog I am going to cover the simplest intro to the Semantic Web: its concept, the need for it, and the way we are going to adopt it… So have a nice journey through this blog.
Why do we need a new technology?
Data is present everywhere on the Web, but it is generally hidden in HTML files: useful in some situations, but not in others. The problem with the majority of data on the Web in this form is that it is difficult to use on a large scale, because there is no global system for publishing data in such a way that it can be easily processed by anyone. Weather forecasts, local sports events, plane schedules, Major League Football statistics, TV channel guides… all of this information is presented by numerous sites, but all in HTML, and in some contexts it is difficult to reuse that data in the ways one might want. So the Semantic Web can be seen as a huge engineering solution… but it is more than that. As it becomes easier to publish data in a re-purposable form, more people will want to publish data, and there will be a knock-on effect: we may find that a large number of Semantic Web applications can be used for a variety of different tasks, increasing the modularity of applications on the Web.
The concept of the Semantic Web was first thought up by Tim Berners-Lee, the inventor of the WWW, URIs, and HTTP. The Semantic Web is now in the research phase all over the world.
The web which is visible to us without any login is known as the surface web.
The surface web is also known as the apparent web: pages whose data is clearly readable in just one click. Search-engine bots read these pages very easily.
The Deep Web is also known as the 'Hidden Web', 'Dark Web', or the 'Invisible Web'. It consists mostly of database-driven pages that are visible only to authorized members, who can access the hidden information after logging into the system.
Such data is totally hidden from search-engine crawlers (the 'spiders'), which means Google, Yahoo, MSN, AltaVista, and other search engines cannot find data from these pages.
You have probably heard about SEO-friendly URLs, but you may never have come across SEO-enemy URLs. Some of you might be thinking that URLs which are not SEO-friendly are considered SEO-enemy URLs! No, that is not true at all. Consider the following URLs…
a) http://www.example-url.com/articles/2009/01/14/ (Excellent)
b) http://www.example-url.com/articles.php?year=2009&month=01&day=14 (Not bad, will work)
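To see how the two forms above relate, here is a small sketch that rewrites the query-string URL (b) into the path-style URL (a) using Python's standard library. The `to_path_style` function is a made-up helper for illustration, and the `example-url.com` host comes from the URLs above.

```python
from urllib.parse import urlsplit, parse_qs

def to_path_style(url):
    """Rewrite /articles.php?year=Y&month=M&day=D as /articles/Y/M/D/."""
    parts = urlsplit(url)
    qs = parse_qs(parts.query)
    year, month, day = qs["year"][0], qs["month"][0], qs["day"][0]
    return f"{parts.scheme}://{parts.netloc}/articles/{year}/{month}/{day}/"

clean = to_path_style(
    "http://www.example-url.com/articles.php?year=2009&month=01&day=14")
# clean == "http://www.example-url.com/articles/2009/01/14/"
```

In practice this kind of mapping is usually done with a server-side rewrite rule (e.g. Apache mod_rewrite) rather than application code, but the transformation is the same.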