Free Web Software - Open Source

SEO and Internet Marketing

AdSense

AdSense is an ad serving program run by Google. Website owners can enroll in this program to enable text, image and, more recently, video advertisements on their sites. These ads are administered by Google and generate revenue on either a per-click or per-thousand-impressions basis. Google is also currently beta-testing a cost-per-action based service.

Google utilizes its search technology to serve ads based on website content, the user's geographical location, and other factors. Those wanting to advertise with Google's targeted ad system may sign up through AdWords. AdSense has become a popular method of placing advertising on a website because the ads are less intrusive than most banners, and the content of the ads is often relevant to the website.

AdSense currently uses JavaScript code to incorporate the advertisements into a participating site. If the code is included on a site that has not yet been crawled by the Mediabot, it will temporarily display advertisements for charitable causes, known as public service announcements (PSAs). (Note that the Mediabot is a separate crawler from the Googlebot that maintains Google's search index.)
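
The snippet itself is generated inside the AdSense account interface and pasted into the participating page. The TypeScript sketch below only illustrates the mechanism, a small piece of client-side script that sets a few parameters and then pulls in Google's ad-serving code; the publisher ID, dimensions and script URL shown here are placeholders standing in for the generated values.

  // Illustrative sketch only: the real snippet is generated by AdSense itself,
  // and the values below are placeholders, not working publisher parameters.
  (window as any).google_ad_client = "pub-0000000000000000"; // hypothetical publisher ID
  (window as any).google_ad_width = 728;                     // ad unit dimensions
  (window as any).google_ad_height = 90;

  // Pull in the ad-serving script; it reads the globals above and writes the ad
  // markup into the page. (The real snippet is inlined directly in the HTML.)
  const adScript = document.createElement("script");
  adScript.src = "https://pagead2.googlesyndication.com/pagead/show_ads.js";
  document.body.appendChild(adScript);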

Many sites use AdSense to monetize their content and some webmasters work hard to maximize their own AdSense income. They do this in three ways:
They use a wide range of traffic generating techniques including but not limited to online advertising.
They build valuable content on their sites that attracts the AdSense ads that pay out the most when clicked.
They use copy on their websites that encourages clicks on ads. Note that Google prohibits webmasters from using phrases like "Click on my AdSense ads" to increase click rates. Accepted phrases are "Sponsored Links" and "Advertisements".

The source of all AdSense income is the AdWords program, which in turn has a complex pricing model based on a Vickrey second-price auction: each advertiser submits a sealed bid that is not observable by competitors, and, for any given click received, the winning advertiser pays only one bid increment above the second-highest bid.
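
As a rough illustration of that pricing rule, and only as a sketch (it ignores Google's quality scoring, reserve prices and multiple ad positions), the price charged for a click could be computed from the sealed bids like this, here in TypeScript:

  // Sketch of second-price ("Vickrey-style") pricing for a single ad slot.
  // Bids are in whole cents; real AdWords pricing also weighs ad quality.
  function clickPriceCents(bidsCents: number[], incrementCents = 1): number | undefined {
    if (bidsCents.length === 0) return undefined;        // nobody bid on the keyword
    const sorted = [...bidsCents].sort((a, b) => b - a); // highest bid first
    if (sorted.length === 1) return incrementCents;      // no competitor: assume a minimum charge
    // The winner pays one increment above the second-highest bid,
    // and never more than their own sealed bid.
    return Math.min(sorted[0], sorted[1] + incrementCents);
  }

  console.log(clickPriceCents([150, 90, 40])); // 91 cents: just above the runner-up's 90-cent bid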

Webmasters and search engines

By 1997 search engines recognized that some webmasters were making efforts to rank well in their search engines, and even manipulating the page rankings in search results. Early search engines, such as Infoseek, adjusted their algorithms to prevent webmasters from manipulating rankings by stuffing pages with excessive or irrelevant keywords.[12]

Due to the high marketing value of targeted search results, there is potential for an adversarial relationship between search engines and SEOs. In 2005, an annual conference, AIRWeb, Adversarial Information Retrieval on the Web,[13] was created to discuss and minimize the damaging effects of aggressive web content providers.

SEO companies that employ overly aggressive techniques can get their client websites banned from the search results. In 2005, the Wall Street Journal profiled a company, Traffic Power, that allegedly used high-risk techniques and failed to disclose those risks to its clients.[14] Wired reported the same company sued a blogger for mentioning that they were banned.[15] Google's Matt Cutts later confirmed that Google did in fact ban Traffic Power and some of its clients.[16]

Some search engines have also reached out to the SEO industry, and are frequent sponsors and guests at SEO conferences and seminars. In fact, with the advent of paid inclusion, some search engines now have a vested interest in the health of the optimization community. Major search engines provide information and guidelines to help with site optimization.[17][18][19] Google has a Sitemaps program[20] to help webmasters learn if Google is having any problems indexing their website and also provides data on Google traffic to the website. Yahoo! Site Explorer provides a way for webmasters to submit URLs, determine how many pages are in the Yahoo! index and view link information.[21]

Getting listings

The leading search engines, Google, Yahoo! and Microsoft, use crawlers to find pages for their algorithmic search results. Pages that are linked from other pages already in a search engine's index do not need to be submitted because they are found automatically. Some search engines, notably Yahoo!, operate a paid submission service that guarantees crawling for either a set fee or a cost per click.[22] Such programs usually guarantee inclusion in the database, but do not guarantee specific ranking within the search results.[23] Yahoo!'s paid inclusion program has drawn criticism from advertisers and competitors.[24] Two major directories, the Yahoo! Directory and the Open Directory Project, both require manual submission and human editorial review.[25] Google offers Google Sitemaps, for which an XML feed can be created and submitted for free to ensure that all pages are found, especially pages that are not discoverable by automatically following links.[26]
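
A Sitemaps feed is essentially an XML list of URLs. The TypeScript sketch below shows a minimal way such a feed could be generated; it is simplified and leaves out optional fields such as lastmod, changefreq and priority:

  // Build a minimal XML sitemap from a list of page URLs.
  // Simplified sketch: URLs are assumed not to need XML escaping.
  function buildSitemap(urls: string[]): string {
    const entries = urls.map((u) => `  <url><loc>${u}</loc></url>`).join("\n");
    return [
      '<?xml version="1.0" encoding="UTF-8"?>',
      '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">',
      entries,
      "</urlset>",
    ].join("\n");
  }

  console.log(buildSitemap(["https://www.example.com/", "https://www.example.com/about"]));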

Search engine crawlers may look at a number of different factors when crawling a site. Not every page is indexed by the search engines. Distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled.[27]

Preventing listings
Main article: robots.txt

To avoid undesirable search listings, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using a robots meta tag. When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed, and it instructs the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled. Pages typically prevented from being crawled include login-specific pages such as shopping carts and user-specific content such as results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.
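
A robots.txt file is a plain-text list of User-agent and Disallow rules. The TypeScript sketch below shows, in very simplified form, how a crawler might honor such rules; it only handles blanket Disallow lines and ignores per-agent sections, Allow rules, comments and wildcards:

  // Simplified sketch of honoring robots.txt Disallow rules.
  const robotsTxt = `
  User-agent: *
  Disallow: /cart/
  Disallow: /search
  `;

  function disallowedPrefixes(robots: string): string[] {
    return robots
      .split("\n")
      .map((line) => line.trim())
      .filter((line) => line.toLowerCase().startsWith("disallow:"))
      .map((line) => line.slice("disallow:".length).trim())
      .filter((path) => path.length > 0);
  }

  function mayCrawl(path: string, robots: string): boolean {
    return !disallowedPrefixes(robots).some((prefix) => path.startsWith(prefix));
  }

  console.log(mayCrawl("/search?q=widgets", robotsTxt)); // false: internal search results
  console.log(mayCrawl("/products/widgets", robotsTxt)); // true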

Search engine optimization history

Webmasters and content providers began optimizing sites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all a webmaster needed to do was submit a page, or URL, to the various engines, which would send a spider to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed.[1] The process involves a search engine spider downloading a page and storing it on the search engine's own server, where a second program, known as an indexer, extracts information about the page: the words it contains, where they are located, any weight given to specific words, and all the links the page contains, which are then placed into a scheduler for crawling at a later date.
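
That crawl-then-index split can be pictured with a couple of small data structures. The TypeScript below is a toy sketch of the idea only; production spiders and indexers are far more elaborate:

  // Toy sketch of the crawl/index pipeline described above: the spider stores a
  // raw page, the indexer extracts words, their positions, and outgoing links.
  interface FetchedPage {
    url: string;
    html: string;
  }

  interface IndexEntry {
    url: string;
    wordPositions: Map<string, number[]>; // word -> word offsets within the page text
    outgoingLinks: string[];              // fed back into the crawl scheduler
  }

  function indexPage(page: FetchedPage): IndexEntry {
    // Crude extraction: strip tags to get text, pull href values to get links.
    const text = page.html.replace(/<[^>]*>/g, " ").toLowerCase();
    const words = text.split(/\W+/).filter((w) => w.length > 0);

    const wordPositions = new Map<string, number[]>();
    words.forEach((word, offset) => {
      const positions = wordPositions.get(word) ?? [];
      positions.push(offset);
      wordPositions.set(word, positions);
    });

    const outgoingLinks = [...page.html.matchAll(/href="([^"]+)"/g)].map((m) => m[1]);
    return { url: page.url, wordPositions, outgoingLinks };
  }

  const scheduler: string[] = []; // links queued for crawling at a later date
  const entry = indexPage({ url: "https://example.com/", html: '<a href="/about">About us</a>' });
  scheduler.push(...entry.outgoingLinks);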

Site owners started to recognize the value of having their sites highly ranked and visible in search engine results, creating an opportunity for both white hat and black hat SEO practitioners. According to industry analyst Danny Sullivan, the earliest known use of the phrase "search engine optimization" was a spam message posted on Usenet on July 26, 1997.[2]

Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag, or index files in engines like ALIWEB. Meta tags provided a guide to each page's content. But using metadata to index pages was found to be less than reliable, because some webmasters abused meta tags by including irrelevant keywords to artificially increase page impressions for their website and to increase their ad revenue. Cost per thousand impressions was at the time the common means of monetizing content websites. Inaccurate, incomplete, and inconsistent metadata in meta tags caused pages to rank for irrelevant searches and to fail to rank for relevant searches.[3] Web content providers also manipulated a number of attributes within the HTML source of a page in an attempt to rank well in search engines.[4]
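
The keyword meta tag in question is simply an element in a page's head, for example <meta name="keywords" content="widgets, cheap widgets">. The TypeScript sketch below shows how such webmaster-provided metadata can be read out of a page, roughly the signal those early engines trusted (illustrative only, and assuming a browser context where DOMParser is available):

  // Sketch: read the keywords meta tag that early engines trusted as a content guide.
  function metaKeywords(html: string): string[] {
    const doc = new DOMParser().parseFromString(html, "text/html");
    const tag = doc.querySelector('meta[name="keywords"]');
    const content = tag?.getAttribute("content") ?? "";
    return content.split(",").map((k) => k.trim()).filter((k) => k.length > 0);
  }

  const page = '<html><head><meta name="keywords" content="widgets, cheap widgets, buy widgets"></head></html>';
  console.log(metaKeywords(page)); // ["widgets", "cheap widgets", "buy widgets"]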

By relying so much on factors exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure their results pages showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters. Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate.

While graduate students at Stanford University, Larry Page and Sergey Brin developed a search engine called "Backrub" that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links.[5] PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web and follows links from one page to another. In effect, this means that some links are stronger than others, as a page with a higher PageRank is more likely to be reached by the random surfer.
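
The random-surfer idea can be written down compactly. The TypeScript sketch below runs a few iterations of a simplified PageRank calculation over a tiny link graph; it uses the commonly cited damping factor of 0.85 and omits the refinements of the published algorithm:

  // Simplified PageRank by iteration over a small link graph.
  // links[p] lists the pages p links to; all link targets are assumed to be keys of links.
  function pageRank(links: Record<string, string[]>, iterations = 20, damping = 0.85): Record<string, number> {
    const pages = Object.keys(links);
    const n = pages.length;
    let rank: Record<string, number> = {};
    for (const p of pages) rank[p] = 1 / n;

    for (let i = 0; i < iterations; i++) {
      const next: Record<string, number> = {};
      for (const p of pages) next[p] = (1 - damping) / n; // chance of the surfer jumping to p at random
      for (const p of pages) {
        const outgoing = links[p];
        if (outgoing.length === 0) continue;              // dangling pages ignored in this sketch
        const share = (damping * rank[p]) / outgoing.length;
        for (const target of outgoing) next[target] += share;
      }
      rank = next;
    }
    return rank;
  }

  // "home" ends up with the highest rank because both other pages link to it.
  console.log(pageRank({ home: ["about"], about: ["home"], blog: ["home"] }));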

Page and Brin founded Google in 1998. Google attracted a loyal following among the growing number of Internet users, who liked its simple design.[6] Off-page factors such as PageRank and hyperlink analysis were considered, as well as on-page factors, to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings. Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaining PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale. Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming.[7]

To reduce the impact of link schemes, as of 2007, search engines consider a wide range of undisclosed factors for their ranking algorithms. Google ranks sites using more than 200 different signals.[8] The three leading search engines, Google, Yahoo and Microsoft's Live.com, do not disclose the algorithms they use to rank pages. Notable SEOs, such as Rand Fishkin, Barry Schwartz, Aaron Wall and Jill Whalen, have studied different approaches to search engine optimization, and have published their expert opinions in online forums and blogs.[9][10] SEO practitioners may also study patents held by various search engines to gain insight into the algorithms.

Search engine optimization


Search engine optimization (SEO) is the process of improving the volume and quality of traffic to a web site from search engines via "natural" ("organic" or "algorithmic") search results. Usually, the earlier a site is presented in the search results, or the higher it "ranks," the more searchers will visit that site. SEO can also target different kinds of search, including image search, local search, and industry-specific vertical search engines.

As a marketing strategy for increasing a site's relevancy, SEO considers how search algorithms work and what people search for. SEO efforts may involve a site's coding, presentation, and structure, as well as fixing problems that could prevent search engine indexing programs from fully spidering a site. Other, more noticeable efforts may include adding unique content to a site, and making sure that the content is easily indexed by search engines and also appeals to human visitors.

The acronym "SEO" can also refer to "search engine optimizers," a term adopted by an industry of consultants who carry out optimization projects on behalf of clients, and by employees who perform SEO services in-house. Search engine optimizers may offer SEO as a stand-alone service or as a part of a broader marketing campaign. Because effective SEO may require changes to the HTML source code of a site, SEO tactics may be incorporated into web site development and design. The term "search engine friendly" may be used to describe web site designs, menus, content management systems and shopping carts that are easy to optimize.

Web hosting service 3

Types of hosting

A typical server "cage," commonly seen in colocation centres.

Internet hosting services can run Web servers; see Internet hosting services.

Hosting services limited to the Web:
Free web hosting service: free, sometimes advertisement-supported web hosting, often limited when compared to paid hosting.
Shared web hosting service: one's Web site is placed on the same server as many other sites, ranging from a few to hundreds or thousands. Typically, all domains may share a common pool of server resources, such as RAM and the CPU. A shared website may be hosted with a reseller.
Reseller web hosting: allows clients to become web hosts themselves. Resellers could function, for individual domains, under any combination of these listed types of hosting, depending on who they are affiliated with as a provider. Resellers' accounts may vary tremendously in size: they may have anything from their own virtual dedicated server to a colocated server.
Virtual Dedicated Server: a physical server is sliced into multiple virtual servers. Each user appears to have a dedicated server but is actually sharing the machine with many other users. The users may have root access to their own virtual space.
Dedicated hosting service: the user gets his or her own Web server and gains full control over it (root access for Linux/administrator access for Windows); however, the user typically does not own the server. Another type of dedicated hosting is self-managed or unmanaged. This is usually the least expensive of the dedicated plans. The user has full administrative access to the box, which means the client is responsible for the security and maintenance of the dedicated server.
Managed hosting service: the user gets his or her own Web server but is not allowed full control over it (root access for Linux/administrator access for Windows); however, they are allowed to manage their data via FTP or other remote management tools. The user is denied full control so that the provider can guarantee quality of service by not allowing the user to modify the server or potentially create configuration problems. The user typically does not own the server; the server is leased to the client.
Colocation web hosting service: similar to the dedicated web hosting service, but the user owns the colo server; the hosting company provides the physical space that the server takes up and takes care of the server. This is the most powerful and expensive type of web hosting service. In most cases, the colocation provider may provide little to no support directly for the client's machine, providing only electrical power, Internet access, and storage facilities for the server. In most cases for colo, the client has his or her own administrator visit the data center on site to perform any hardware upgrades or changes.
Clustered hosting: having multiple servers hosting the same content for better resource utilization.

Some specific Web services:
File hosting service: hosts files rather than web pages
Image hosting service
Video hosting service
Blog hosting service
One-click hosting
Shopping cart software

Web hosting service 2

Hosting uptime refers to the percentage of time the host is accessible via the internet. Many providers state that they aim for a 99.9% uptime, but there may be server restarts and planned (or unplanned) maintenance in any hosting environment.

A common claim from hosting providers is '99% or 99.9% server uptime', but this often refers only to the server being powered on and does not account for network downtime. Real downtime can therefore be larger than the percentage guaranteed by the provider. Many providers tie uptime and accessibility into their own Service Level Agreement (SLA). SLAs may or may not include refunds or reduced costs if performance goals are not met. One must be careful when selecting a new company and should read all terms and conditions carefully. A potential customer should also check the web hosting company's Acceptable Use Policy (AUP) in order to avoid cancellation of services due to activities that are considered a violation.
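
The arithmetic behind those percentages is worth spelling out, since even 99.9% uptime still permits a noticeable amount of downtime over a year. A quick TypeScript sketch:

  // Convert an uptime percentage into the downtime it still permits.
  function allowedDowntimeHours(uptimePercent: number, periodHours: number): number {
    return periodHours * (1 - uptimePercent / 100);
  }

  const hoursPerYear = 24 * 365;
  console.log(allowedDowntimeHours(99, hoursPerYear).toFixed(1));   // about 87.6 hours per year
  console.log(allowedDowntimeHours(99.9, hoursPerYear).toFixed(1)); // about 8.8 hours per year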

Web hosting service 1


A web hosting service is a type of Internet hosting service that allows individuals and organizations to make their own websites accessible via the World Wide Web. Web hosts are companies that provide space on a server they own for use by their clients, as well as providing Internet connectivity, typically in a data center. Web hosts can also provide data center space and connectivity to the Internet for servers they do not own to be located in their data center; this is called colocation.


Service scope

The scopes of hosting services vary widely. The most basic is webpage and small-scale file hosting, where files can be uploaded via File Transfer Protocol (FTP) or a Web interface. The files are usually delivered to the Web "as is" or with little processing. Many Internet service providers (ISPs) offer this service for free to their subscribers. People can also obtain Web page hosting from other, alternative service providers. Personal web site hosting is typically free, advertisement-sponsored, or cheap. Business web site hosting often has a higher expense.

Single page hosting is generally sufficient only for personal web pages. A complex site calls for a more comprehensive package that provides database support and application development platforms (e.g. PHP, Java, and ASP.NET). These facilities allow the customers to write or install scripts for applications like forums and content management. For e-commerce, SSL is also required.

The host may also provide an interface or control panel (e.g. cPanel, Hosting Controller, Plesk or others) for managing the Web server and installing scripts, as well as other services like e-mail. Control panels and web interfaces have been causing some controversy lately, as Web.com claims that it holds patent rights to the hosting technology with its 19 patents. Hostopia, a large wholesale host, recently purchased a license to use that technology from Web.com for 10% of retail revenues.[1] Web.com recently sued Go Daddy as well for similar patent infringement.[2]

Some hosts specialize in certain software or services (e.g. e-commerce). They are commonly used by larger companies to outsource network infrastructure to a hosting company. Searchable directories can be used to find a web hosting company. One must be careful when searching for a new company, because many of the people promoting service providers are actually affiliates and their reviews may be biased.

Website Planning

Before creating and uploading a website, it is important to take the time to plan exactly what is needed in the website. Thoroughly considering the audience or target market, as well as defining the purpose and deciding what content will be developed are extremely important.


Purpose

It is essential to define the purpose of the website as one of the first steps in the planning process. A purpose statement should show focus based on what the website will accomplish and what the users will get from it. A clearly defined purpose will help the rest of the planning process as the audience is identified and the content of the site is developed. Setting short and long term goals for the website will help make the purpose clear and plan for the future when expansion, modification, and improvement will take place. Also, goal-setting practices and measurable objectives should be identified to track the progress of the site and determine success.


Audience

Defining the audience is a key step in the website planning process. The audience is the group of people who are expected to visit your website – the market being targeted. These people will be viewing the website for a specific reason, and it is important to know exactly what they are looking for when they visit the site. A clearly defined purpose or goal for the site, as well as an understanding of what visitors want to do or feel when they come to the site, will help to identify the target audience. Upon considering who is most likely to need or use the content, a list of characteristics common to those users can be compiled, such as:
Audience Characteristics
Information Preferences
Computer Specifications
Web Experience

Taking into account the characteristics of the audience will allow an effective website to be created that will deliver the desired content to the target audience.


Content

Content evaluation and organization requires that the purpose of the website be clearly defined. Collecting a list of the necessary content then organizing it according to the audience's needs is a key step in website planning. In the process of gathering the content being offered, any items that do not support the defined purpose or accomplish target audience objectives should be removed. It is a good idea to test the content and purpose on a focus group and compare the offerings to the audience needs. The next step is to organize the basic information structure by categorizing the content and organizing it according to user needs. Each category should be named with a concise and descriptive title that will become a link on the website. Planning for the site's content ensures that the wants/needs of the target audience and the purpose of the site will be fulfilled.


Compatibility and restrictions

Because of the market share of the browsers in use (which depends on your target market), the compatibility of your website with viewers is restricted. For instance, a website designed for the majority of websurfers will be limited to the use of valid XHTML 1.0 Strict or older, Cascading Style Sheets Level 1, and 1024x768 display resolution. This is because Internet Explorer is not fully W3C standards compliant with the modularity of XHTML 1.1 and the majority of CSS beyond Level 1. A target market with more users of alternative browsers (e.g. Firefox and Opera) allows for more W3C compliance and thus a greater range of options for a web designer.

Another restriction on webpage design is the use of different image file formats. The majority of browsers support GIF, JPEG, and PNG (with restrictions). Again, Internet Explorer is the major restriction here, not fully supporting PNG's advanced transparency features, with the result that the GIF format is still the most widely used graphic file format for transparent images.

Many website incompatibilities go unnoticed by the designer and unreported by the users. The only way to be certain a website will work on a particular platform is to test it on that platform.


Planning documentation

Documentation is used to visually plan the site while taking into account the purpose, audience and content, to design the site structure, content and interactions that are most suitable for the website. Documentation may be considered a prototype for the website – a model which allows the website layout to be reviewed, resulting in suggested changes, improvements and/or enhancements. This review process increases the likelihood of success of the website.

First, the content is categorized and the information structure is formulated. The information structure is used to develop a document or visual diagram called a site map. This creates a visual of how the web pages will be interconnected, which helps in deciding what content will be placed on what pages. There are three main ways of diagramming the website structure:
Linear Website Diagrams will allow the users to move in a predetermined sequence;
Hierarchical structures (or Tree Design Website Diagrams) provide more than one path for users to take to their destination;
Branch Design Website Diagrams allow for many interconnections between web pages such as hyperlinks within sentences.
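
Whichever diagram type is chosen, the structure being planned can also be captured in a simple data form before any pages exist. The TypeScript sketch below models a site map as a tree of page nodes, with optional cross-links standing in for the branch-style interconnections (a planning aid only; the page names are invented):

  // Planning sketch: a site map as a tree of pages, with optional cross-links
  // representing branch-style interconnections such as hyperlinks within sentences.
  interface PageNode {
    title: string;
    children: PageNode[];   // hierarchical (tree) structure
    crossLinks?: string[];  // titles of non-hierarchical link targets
  }

  const siteMap: PageNode = {
    title: "Home",
    children: [
      { title: "About", children: [] },
      {
        title: "Products",
        children: [{ title: "Widgets", children: [], crossLinks: ["Support"] }],
      },
      { title: "Support", children: [] },
    ],
  };

  // A linear diagram is just the special case where every node has at most one child.
  console.log(siteMap.children.map((page) => page.title)); // ["About", "Products", "Support"]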

In addition to planning the structure, the layout and interface of individual pages may be planned using a storyboard. In the process of storyboarding, a record is made of the description, purpose and title of each page in the site, and they are linked together according to the most effective and logical diagram type. Depending on the number of pages required for the website, documentation methods may include using pieces of paper and drawing lines to connect them, or creating the storyboard using computer software.

Some or all of the individual pages may be designed in greater detail as a website wireframe, a mock-up or comprehensive layout of what the page will actually look like. This is often done in a graphic or layout design program. The wireframe has no working functionality; it serves only as a planning tool.

CSS versus tables

For more details on this topic, see Tableless web design.

Back when Netscape Navigator 4 dominated the browser market, the popular solution available for designers to lay out a Web page was to use tables. Often even simple designs for a page would require dozens of tables nested in each other. Many web templates in Dreamweaver and other WYSIWYG editors still use this technique today. Navigator 4 didn't support CSS to a useful degree, so it simply wasn't used.

After the browser wars were over and Internet Explorer dominated the market, designers started turning toward CSS as an alternative, better means of laying out their pages. CSS proponents say that tables should be used only for tabular data, not for layout. Using CSS instead of tables also returns HTML to semantic markup, which helps bots and search engines understand what's going on in a web page. Today, all modern Web browsers support CSS, though with different degrees of limitation.

However, one of the main points against CSS is that by relying on it exclusively, control is essentially relinquished as each browser has its own quirks which result in a slightly different page display. This is especially a problem as not every browser supports the same subset of CSS rules. For designers who are used to table-based layouts, developing Web sites in CSS often becomes a matter of trying to replicate what can be done with tables, leading some to find CSS design rather cumbersome due to lack of familiarity. For example, at one time it was rather difficult to produce certain design elements, such as vertical positioning, and full-length footers in a design using absolute positions. With the abundance of CSS resources available online today, though, designing with reasonable adherence to standards involves little more than applying CSS 2.1 or CSS 3 to properly structured markup.

These days most modern browsers have solved most of these quirks in CSS rendering and this has made many different CSS layouts possible. However, some people continue to use old browsers, and designers need to keep this in mind, and allow for graceful degrading of pages in older browsers. Most notable among these old browsers are Internet Explorer 5 and 5.5, which, according to some web designers, are becoming the new Netscape Navigator 4 — a block that holds the World Wide Web back from converting to CSS design.

Flash

Adobe Flash (formerly Macromedia Flash) is a proprietary, robust graphics animation/application development program used to create and deliver dynamic content, media (such as sound and video), and interactive applications over the web via the browser.

Flash is not a standard produced by a vendor-neutral standards organization like most of the core protocols and formats on the Internet. Flash is much more restrictive than the open HTML format, though, requiring a proprietary plugin to be seen, and it does not integrate with most web browser UI features like the "Back" button unless a hyperlink is programmed to link to a new HTML page from the Flash file, in which case the animation of the previous page would reset. However, those restrictions may be irrelevant depending on the goals of the web site design.

According to an NPD study, 98% of US Web users have the Flash Player installed,[2] with 45%-56%[3] (depending on region) having the latest version. Numbers vary depending on the detection scheme and research demographics.

Many graphic artists use Flash because it gives them exact control over every part of the design, and anything can be animated and generally "jazzed up". Some application designers enjoy Flash because it lets them create applications that don't have to be refreshed or go to a new web page every time an action occurs. Flash can use embedded fonts instead of the standard fonts installed on most computers. There are many sites which forego HTML entirely for Flash. Other sites may use Flash content combined with HTML as conservatively as gifs or jpegs would be used, but with smaller vector file sizes and the option of faster loading animations. Flash may also be used to protect content from unauthorized duplication or searching.

Flash detractors claim that Flash websites tend to be poorly designed, and often use confusing and non-standard user interfaces. Until recently, search engines have been unable to index Flash objects, which has prevented sites from having their content easily found. This is because many search engine crawlers rely on text to index websites. It is possible to specify alternate content to be displayed for browsers that do not support Flash. Using alternate content also helps search engines to understand the page, and can result in much better visibility for the page. However, the vast majority of Flash websites are not disability accessible (for screen readers, for example) or Section 508 compliant. An additional issue is that sites which serve different content to search engines than to their human visitors are usually judged to be spamming search engines, and are automatically banned.
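
Serving alternate content legitimately usually means keeping the indexable HTML in the page itself and swapping in the Flash movie only when the plugin is actually present, rather than showing search engines something different from what visitors see. The TypeScript sketch below outlines that progressive-enhancement approach; the element ID and movie path are hypothetical, and plugin detection here uses the old navigator.mimeTypes lookup:

  // Sketch: keep the plain-HTML version of the content in the page, and replace it
  // with the Flash movie only when the Flash plugin is detected.
  function hasFlashPlugin(): boolean {
    // Older browsers exposed installed plugins through navigator.mimeTypes.
    return navigator.mimeTypes.namedItem("application/x-shockwave-flash") !== null;
  }

  function upgradeToFlash(containerId: string, movieUrl: string): void {
    const container = document.getElementById(containerId);
    if (!container || !hasFlashPlugin()) return; // leave the HTML fallback in place

    container.innerHTML =
      `<object type="application/x-shockwave-flash" data="${movieUrl}" width="600" height="400">` +
      `<param name="movie" value="${movieUrl}">` +
      `</object>`;
  }

  upgradeToFlash("intro", "/media/intro.swf"); // "intro" holds the HTML fallback content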

The most recent incarnation of Flash's scripting language (called ActionScript, an ECMAScript-based language similar to JavaScript) incorporates long-awaited usability features, such as respecting the browser's font size and allowing blind users to use screen readers. ActionScript 2.0 is an object-oriented language, allowing the use of CSS, XML, and the design of class-based web applications.
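
ActionScript 2.0's class syntax will look familiar to anyone who has used another ECMAScript-family language. The short sketch below is written in TypeScript rather than ActionScript itself, purely to illustrate the class-based style the paragraph describes; the class and its methods are invented for the example:

  // Illustration of a class-based component, in TypeScript rather than ActionScript 2.0.
  class SlideShow {
    private current = 0;

    constructor(private readonly imageUrls: string[]) {}

    next(): string {
      this.current = (this.current + 1) % this.imageUrls.length;
      return this.imageUrls[this.current];
    }
  }

  const show = new SlideShow(["intro.jpg", "features.jpg", "contact.jpg"]);
  console.log(show.next()); // "features.jpg"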

Web site design

A Web site is a collection of information about a particular topic or subject. Designing a website is defined as the arrangement and creation of Web pages that in turn make up a website. A Web page consists of information for which the Web site is developed. A website might be compared to a book, where each page of the book is a web page.

There are many aspects (design concerns) in this process, and due to the rapid development of the Internet, new aspects may emerge. For typical commercial Web sites, the basic aspects are:
The site design is defined by the topic and content.
The content, substance, and information on the site should be relevant to the site and should target the area of the public that the website is concerned with.
The site should be user-friendly, with the interface and navigation simple and reliable. If the site is large enough and contains enough information, a site browser may be needed so that information can be found quickly, without using the navigation tools.
The appearance should include a single style that flows throughout, to show consistency. The style should be professional, look good and most of all be relevant to the users and site content.
The visibility of the site's text and information should be paramount as that is what the users are visiting for.
The site must also be easy to find on the internet and if possible should be listed on most, if not all, major search engines.

A Web site typically consists of text and images. The first page of a website is known as the Home page or Index. Some websites use what is commonly called a Splash Page. Splash pages might include a welcome message, language/region selection, or disclaimer. Each web page within a Web site is an HTML file which has its own URL. After each Web page is created, they are typically linked together using a navigation menu composed of hyperlinks. Faster browsing speeds have led to shorter attention spans and more demanding online visitors and this has resulted in less use of Splash Pages, particularly where commercial websites are concerned.

Once a Web site is completed, it must be published or uploaded in order to be viewable to the public over the internet. This may be done using an FTP client. Once published, the Web master may use a variety of techniques to increase the traffic, or hits, that the website receives. This may include submitting the Web site to a search engine such as Google or Yahoo, exchanging links with other Web sites, creating affiliations with similar Web sites, etc.


Multidisciplinary requirements

Web site design crosses multiple disciplines of information systems, information technology and communication design. The website is an information system whose components are sometimes classified as front-end and back-end. The observable content (e.g. page layout, user interface, graphics, text, audio) is known as the front-end. The back-end comprises the organization and efficiency of the source code, invisible scripted functions, and the server-side components that process the output from the front-end. Depending on the size of a Web development project, it may be carried out by a multi-skilled individual (sometimes called a web master), or a project manager may oversee collaborative design between group members with specialized skills.

Web design

Web design is the design and graphical presentation of content shown on the Internet, in the form of Web sites and other Web applications, using many different forms of media. The basic design of most pages on the Web uses HTML, CSS, and, more recently, XHTML. Many sites today also integrate various forms of dynamic, interactive content, such as e-commerce, using server-side languages like PHP (Hypertext Preprocessor) and ASP. Web design contrasts with Web development, which includes Web server configuration, writing Web applications, and server security.

History

Tim Berners-Lee, the inventor of the World Wide Web, published a website in August 1991.[1] Berners-Lee was the first to combine Internet communication (which had been carrying email and the Usenet for decades) with hypertext (which had also been around for decades, but was limited to browsing information stored on a single computer, such as interactive CD-ROM designs).

Websites are written in a markup language called HTML, and early versions of HTML were very basic, only giving websites basic structure (headings and paragraphs) and the ability to link using hypertext. This was new and different from existing forms of communication: users could easily navigate to other pages by following hyperlinks from page to page.

As the Web and web design progressed, the markup language used to make it became more complex and flexible, giving the ability to add objects like images and tables to a page. Features like tables, which were originally intended to be used to display tabular information, were soon subverted for use as invisible layout devices. With the advent of Cascading Style Sheets (CSS), table-based layout is increasingly regarded as outdated. Database integration technologies such as server-side scripting and design standards like CSS further changed and enhanced the way the Web is made.

The introduction of Macromedia Flash (now Adobe Flash) into an already interactivity-ready scene has further changed the face of the Web, giving new power to designers and media creators, and offering new interactivity features to users, often at the expense of usability for persons with disabilities, search engine visibility and browser functions available to HTML.
