Technological change, whether in computing, manufacturing, agriculture, or transportation, can seem dramatic. Yet however abrupt these changes look in hindsight, to those living through them they can feel slow and almost unnoticeable. Take the rise of the internet. The World Wide Web was proposed in 1989, but the core technology that made it possible, the TCP/IP protocol suite, grew out of Cold War-era networking research: packet switching dates to the 1960s and TCP/IP itself to the 1970s. While these changes were happening, most people dismissed them as the realm of geeks, nerds, and other techno-savvy people on the fringes. Now our culture increasingly revolves around the internet. It is that powerful. This is why, when considering changes in internet technology, it pays, both as an investor in technology stocks and as a consumer of technology, to pay attention to these slight changes.
The smallest, most innocent-looking choice of protocol or standard for an emerging technology can mean the difference between millions of jobs and stock market profits, or dead ends. Many such choices had to be made in the invention and propagation of the internet. Thankfully, many of those choices favored open standards and free software, which is why the internet spread as fast as it did. The same kinds of decisions are now being made for the next wave in the evolution of the internet: WEB 3.0.
What is WEB 3.0?
Just like with any new technology or paradigm shift, the best way to define something new is to define what came before it, or to define what it is not. To understand WEB 3.0, we first have to backtrack and look at WEB 2.0. What is WEB 2.0? WEB 2.0 is the evolution of internet technology toward socially created content. This was just a pipe dream back in the early 2000s when the term was first being discussed. Everybody was using the term, but just as with WEB 3.0 now, not everybody fully understood what it meant, and even those who agreed on a definition did not agree on its scope.
Web 2.0 Defined as User-Generated Content
Simply put, WEB 2.0 is user-generated content: content generated by the community. The most obvious example is a pre-WEB 2.0 invention, the online message board. You would log in to a message board, post something, and people would respond. It is very straightforward, very flat, and very predictable. The innovation of WEB 2.0 is that it turned this sequential, community-generated content into a live feed. When you log in to Facebook and post a status update, your friends respond. Not only do you get a copy of their response, but everybody else who visits your wall sees it, and each responder's own page carries a copy as well. This is nothing more than an explosion and syndication of the old-school message board.
The impact, however, is that viral marketing, the old marketing holy grail, became reality. If somebody responds to your wall post, your friends see the response; they can comment back, so their walls carry a copy of the message, and their friends see that and comment in turn. The process keeps repeating itself. With each person who views the discussion, the sphere of influence expands exponentially, because those people have friends and followers of their own. In the old message board system, only a few people could see a message, and it spread slowly. Because Facebook and Twitter operate on this model of overlapping spheres of influence, friend lists, and follower lists, a message from a virtual nobody can explode all over the world once enough people repeat it to their own personal spheres of influence. That is the power of WEB 2.0.
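To make the mechanics concrete, here is a minimal Python sketch of how a message fans out through friend lists. The graph, the names, and the everybody-reshares assumption are all invented for illustration; real networks are vastly larger and people reshare selectively.

```python
from collections import deque

# Hypothetical friend graph: each user maps to the friends who see their posts.
friends = {
    "alice": ["bob", "carol"],
    "bob": ["alice", "dave", "erin"],
    "carol": ["alice", "frank"],
    "dave": ["bob"],
    "erin": ["bob", "frank"],
    "frank": ["carol", "erin"],
}

def viral_reach(origin, graph):
    """Breadth-first spread: every time a user sees and reshares a post,
    all of their friends see it too. Returns the users reached at each hop."""
    seen = {origin}
    frontier = deque([(origin, 0)])
    reach_by_hop = {}
    while frontier:
        user, hop = frontier.popleft()
        reach_by_hop.setdefault(hop, []).append(user)
        for friend in graph.get(user, []):
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, hop + 1))
    return reach_by_hop

# Each hop reaches a new ring of people the previous ring's friends expose.
print(viral_reach("alice", friends))
```

Even in this six-person toy graph, one post covers the whole network in two hops; with realistic friend counts, each hop multiplies the audience, which is the exponential effect described above.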
Getting to WEB 2.0, however, involved many stumbles and half-baked ideas. There were many poorly developed iterations before the idea fully blossomed into its current form. MySpace and Friendster are examples. They originally grew by leaps and bounds, then ran out of steam because they never got the core social networking equation right: how do you automate a sphere of influence so that viral messaging becomes fully automatic? This is a helpful case study for people trying to define WEB 3.0, because by definition, when we look at something that is happening now and will fully bloom in the future, we do not have a fully outlined concept in front of us. We only get glimmers.
Definition 1. Blurring online and offline worlds
One popular definition of WEB 3.0 is that it will merge the offline and online worlds: online information will follow you as you walk around in the real world. An early implementation of this definition is the smartphone tied to GPS. When you search for local restaurants, different restaurants appear depending on where you are standing. This integrates easily with WEB 2.0, because user reviews of those nearby restaurants can be fed to your phone, letting you make a more informed decision. Another implementation is using your mobile phone as a translator: you point your tablet or phone at a sign and it translates the sign in real time, pulling the information from the internet. Finally, there is the case where you generate content online on one device, go offline, and later pick up another device; once that device gets online, everything syncs, so your documents are always up to date and you are always connected. Another term for this definition of WEB 3.0 is augmented reality, although augmented reality has more to do with mobile phones and tablets specifically. Your actions offline become inputs fed into online sources of data, which are then fed back to you so you can make decisions in the real world. This is a device-influenced definition, but it definitely has a place within the construction of WEB 3.0 as the merging of online and offline worlds.
Definition 2. Virtual reality as WEB 3.0
Another definition of WEB 3.0 has less to do with real-world inputs pulling online data to make real-world decisions, and more to do with creating a virtual world where real-world decisions can be made. For example, Second Life is a virtual world where people spend real money on virtual goods. While this currently stays within the realm of games and simulations, virtual reality proponents of WEB 3.0 see the trend expanding tremendously, to the point that people live through avatars in cyberspace and the virtual economy, virtual goods, and virtual services become as important as, if not more important than, the real-world economy.
You can already see this playing out in MMORPGs like World of Warcraft. It may look like just a game, but it has its own virtual community. People live virtual lives in the game and trade real money for virtual goods. Because such trade violates World of Warcraft's terms of service, it operates as a shadow market whose size cannot be accurately measured, but estimates of the real-money trade in game goods run from several million to several hundred million dollars annually. That is how lucrative virtual worlds are, and WoW and Second Life are just the beginning. Virtual worlds can extend to a wide range of niches in human concerns: parenting, occupations, real-world geographic regions, you name it. Whatever sub-niche of social interaction you can think of, a virtual world can be created for it. Think of it as social networking exploded into its own virtual economy, with heavily enhanced social interaction around a particular subject matter.
Definition 3. WEB 3.0 as a semantic web
None other than the inventor of the World Wide Web, Tim Berners-Lee, has said that WEB 3.0 would be all about the semantic web. What is the semantic web? It is a web in which data is structured so that search engines and other machines can process it and compile it into forms people can actually use. As the years have rolled on, this idea has morphed and blended with the earlier WEB 2.0 technology. While millions of web pages are still being generated by individual authors, companies, institutions, and organizations the world over, the rate of content production by individual users interacting with their social networks is even more striking.
The original problem with search engines is that anybody in the world can put up a web page and publish anything on it. There is no way to vouch for the accuracy, usefulness, or value of the content. Early search engines were blind: all they could look at was the actual content of the page, so if a certain word was repeated often enough, they assumed the page truly was about that word. The sad consequence of this generation of search technology was that the internet quickly filled with affiliate-generated pages for pornography. Search engines like AltaVista, Infoseek, Lycos, and HotBot were soon cluttered with useless results, because when you clicked through, you saw pornography. There was then, as there is now, a heavy economic incentive to drive as much traffic as possible to porn affiliate pages: when a visitor clicks through to the advertiser and makes a purchase, the person who built the page makes money. That was the trap that snared the early wave of search engines.
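A toy sketch shows why that first-generation approach was so easy to game. The pages and the scoring function below are hypothetical, reduced to pure word counting:

```python
def naive_score(page_text, query):
    """First-generation ranking: score a page purely by how often
    the query word appears in its text."""
    words = page_text.lower().split()
    return words.count(query.lower())

# Hypothetical pages: a genuine article versus a keyword-stuffed spam page.
genuine = "cats are independent pets and many people love cats"
spam = "cats cats cats cats cats buy now cats cats cats"

print(naive_score(genuine, "cats"))  # prints 2
print(naive_score(spam, "cats"))     # prints 8 -- keyword stuffing wins
```

Anyone who simply repeated the target word outranked honest pages, which is exactly how affiliate spam flooded the early engines.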
The next generation, characterized by Inktomi and of course Google, looked at the links pointing to a page and what those links had in common. The theory mirrors real life: the more links point to a page using the word "cats," the higher the likelihood that the page really is about cats. If enough people point at you and say "John," and a stranger asks them "Where is John?" and follows their pointing, chances are quite high that your name is John. The problem with this generation of search technology was that it was easy to beat. SEO specialists the world over built software that found places on the internet that published user-generated content and flooded them with junk pages carrying links to a particular site. More insidiously, many would simply buy links: pay a web page owner to link to their website, nice and simple. Eventually Google, Yahoo!, and Bing search results became uneven. While they are still better than the previous generation of search engines, they are constantly locked in an arms race, an evolutionary struggle, with spammers.
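The link-counting idea can be sketched just as simply. The link graph below is invented, and real engines used far more sophisticated link analysis (PageRank, for instance, weights a link by the importance of the page it comes from), but the core intuition is counting who points where, and with what words:

```python
# Hypothetical link graph: (source_page, anchor_text, target_page) triples.
links = [
    ("blog1", "cats", "catsite"),
    ("blog2", "cats", "catsite"),
    ("forum", "cats", "catsite"),
    ("blog1", "cats", "spamsite"),
]

def link_score(target, anchor, link_graph):
    """Second-generation ranking: a page's score for a word is the number
    of inbound links whose anchor text uses that word."""
    return sum(1 for src, text, dst in link_graph
               if dst == target and text == anchor)

print(link_score("catsite", "cats", links))   # prints 3
print(link_score("spamsite", "cats", links))  # prints 1
```

The weakness is visible in the data structure itself: anyone who can create or buy rows in that table can manufacture relevance, which is precisely what link spammers did.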
The rise of WEB 2.0 handed search engines a powerful new weapon: community-based credibility. If enough people share the same link among their friends, labeling it as a particular type of content, Google is betting on two things. First, that you are not lying to your friends, sharing a link labeled "cats" that actually goes to a page about turtles. Second, that you have a vested interest in staying truthful, because if you shared garbage with your friends, they would eventually tune you out. With the rise of social networks like Twitter and Facebook, search engines found an answer to the old trust problem. Links can be faked; social trust is harder to fake. There are more people involved and more moving parts. It cannot easily be gamed by one spammer running a specialized piece of software, the way links could, and it is also much harder to buy than links are.
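One way to picture blending the two signals is a score in which link counts get diminishing returns while distinct sharers count at full weight. Everything here, the pages, the numbers, and the weights, is an invented illustration of the idea, not how any real search engine actually scores pages:

```python
import math

# Hypothetical signals per page: inbound links (easy to fake in bulk) and
# distinct users who shared the page with friends (harder to fake).
pages = {
    "honest-cat-article": {"links": 40, "sharers": 500},
    "link-farmed-spam":   {"links": 5000, "sharers": 3},
}

def trust_score(signals, link_weight=1.0, share_weight=1.0):
    """Toy blend: log-dampened link count plus full-weight sharer count,
    so a hundredfold link-spam campaign barely moves the score while
    independent human shares dominate it. Weights are invented."""
    return (link_weight * math.log1p(signals["links"])
            + share_weight * signals["sharers"])

for name, signals in pages.items():
    print(name, round(trust_score(signals), 1))
```

Under this sketch the honest article wins easily: its 500 independent sharers outweigh the spam page's 5,000 farmed links, which capture the bet described above.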
To apply Tim Berners-Lee's definition today, we have to adjust it a little for the recent rise of social networks, but the underlying thesis still holds. WEB 3.0 is the arena where data generation and social interaction produce purer, more trustworthy data. In essence, online data becomes more trustworthy. That is just one level of the analysis, though. WEB 3.0 does not stop there, although that alone is quite an achievement. Who would not appreciate cleaner search results?
Definition 4. WEB 3.0 as socialized knowledge generation
Wikipedia is a very powerful example of WEB 3.0. It is a storehouse for a publicly generated body of knowledge. There are many checks and balances within the community: misinformation may get published, but it does not stay on Wikipedia for long, because articles pass through several waves of community editing. The information that tends to remain is the information that satisfies the most people in the Wikipedia community.
The problem with Wikipedia's content generation is that politics and internal hierarchies come into play, so issues of bias, partiality, and hidden agendas cannot be avoided. In the long run, this may degrade Wikipedia's editorial quality. That is why its primary value to responsible researchers is as a first stage of research. You do not begin and end your research with Wikipedia; it is a great place to start, but by no means the place to finish. However limited Wikipedia may be, it does point to a possible direction for the semantic web aspect of WEB 3.0.
User content generation is relentless at Facebook, Twitter, and other social networking sites. If Tim Berners-Lee's social search concept for WEB 3.0 fully blooms, all this information will be pulled into the center of an information processing technology and then, through the behavior of the community itself, parsed, classified, and processed into a form that people can retrieve and trust. In a sense, the community generates the information, the community polices itself, and the search engines simply harvest the result. Because this is an organic way to generate knowledge, people cannot be counted on to police themselves deliberately and consciously; for this reason, a central organizing technology comes in. Google already has the beginnings of one in its Caffeine update. It is still rudimentary and clunky, producing mixed and rough results. But the glimmers are there of a centralized organizing principle in which globally created communal knowledge is passed through crowd-based editing, not through any consciously coordinated effort but through intentional individual actions, and is thereby purified.
The Bottom Line
There is an old saying: "The future is so bright I have to wear sunglasses." Sometimes the future is so intense that we cannot see its exact outlines; we can only see its general shape, and that is exactly what is happening now. The impact of WEB 3.0, whether in its augmented reality form, its virtual reality definition, or its semantic web definition, will be profound. It will change how we interact with people online and offline, because these technologies can profoundly affect our psychological conceptions of interpersonal relations and our sense of identity. Similarly, how we look for and generate knowledge and content is in for dramatic changes. Just as we said goodbye to porn and other worthless junk results when researching high school homework, we may soon say goodbye to purely commercial pages that rank highly only because of the SEO genius of their creators. The information will be real-time, more trustworthy, and exactly what we need. We are still several leaps away from truly intelligent and concise search technology, but thanks to the rise of social networking and social information, that day will arrive sooner rather than later.
Finally, if WEB 3.0 evolves more fully toward augmented reality, we stand to see the lines between the offline world and the online world become invisible. Decisions in the offline world will pull in online data in real time, and online realities will be pulled into offline decision-making and action. Indeed, the collision of real-world reality and WEB 2.0 social interaction needs no better illustration than the recent revolutions in the Middle East: the Arab Spring is the fruit of WEB 2.0 technology. It both scares me and gives me quite a bit of hope to imagine what revolutions WEB 3.0 will make possible.