9:56 AM

what is web hosting?




Web Hosting, or 'Hosting', is a service provided by a vendor which offers a physical location for the storage of web pages and files. Think of a web hosting company as a type of landlord: it rents physical space on its servers, allowing your web pages to be viewed on the Internet.


What is a Web Server?

Generally used in reference to the computer hardware that provides World Wide Web services on the Internet, a Web server includes the hardware, operating system, server software, TCP/IP protocols and the Web site content. Web servers process requests from browsers for web pages and serve them up via HTTP.


What is HTTP?

HyperText Transfer Protocol - the underlying protocol used by the World Wide Web. HTTP defines how messages are formatted and transmitted, and what action Web servers and browsers should take in response to various commands. For example, when you enter a URL in your browser, this actually sends an HTTP command to the Web server directing it to fetch and transmit the requested Web page.
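As a rough illustration of the exchange described above, here is a minimal Python sketch that sends an HTTP GET command to a web server and reads back the response; the host name example.com is just a placeholder.

    # Minimal HTTP GET request using Python's standard library.
    # "example.com" is only a placeholder host.
    import http.client

    conn = http.client.HTTPConnection("example.com", 80)
    conn.request("GET", "/")                 # the HTTP command asking for the page "/"
    response = conn.getresponse()            # the web server's HTTP response
    print(response.status, response.reason)  # e.g. "200 OK"
    page = response.read()                   # the requested web page (HTML)
    conn.close()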

What is a Domain Name?

An addressing construct used for identifying and locating computers on the Internet. Domain names provide a system of easy-to-remember Internet addresses, which can be translated by the Domain Name System (DNS) into the numeric addresses (Internet Protocol (IP) numbers) used by a network. (CoffeeCup.com is a domain name as is Google.com)

What is an IP Address?

Every computer connected to the Internet must have a unique address known as an IP (Internet Protocol) address. The IP address is a numeric address written as a set of four numbers separated by dots, for example 64.149.219.213. The address provides a unique identification of a computer and the network it belongs to.


What does URL stand for?

Uniform Resource Locator - the global address of documents and other resources on the World Wide Web. The first part of the address indicates what protocol to use, and the second part specifies the IP address or the domain name where the resource is located.
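As a small sketch of those two parts, Python's standard urllib.parse module can split an example URL into its protocol and its host/domain portion:

    # Splitting a URL into protocol, host, and resource path (illustrative only).
    from urllib.parse import urlparse

    parts = urlparse("http://www.example.com/index.html")
    print(parts.scheme)   # "http"            - the protocol to use
    print(parts.netloc)   # "www.example.com" - the domain name (or IP address)
    print(parts.path)     # "/index.html"     - the resource on that host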


What does DNS stand for?

Domain Name System - a system of mapping names to IP addresses. Because domain names are alphabetic, they're easier for humans to remember. The Internet, however, is really based on IP addresses. Every time you use a domain name, DNS translates the name into the corresponding IP address. It is similar to a phonebook for the Internet.
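A minimal sketch of that phonebook lookup, using Python's standard socket module to ask the system resolver (and therefore DNS) for the address behind a name; the domain and the printed address are only examples and will vary:

    # DNS lookup: translate a domain name into an IP address.
    import socket

    ip = socket.gethostbyname("google.com")
    print(ip)   # e.g. "142.250.80.46" (the actual address varies)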

What does FTP stand for?

File Transfer Protocol - allows the transfer of one or more files from one computer to another across the Internet, usually from a personal computer to a server or vice versa.

What is Uploading?

Uploading is the transfer of files from your local computer to a remote computer, usually a server.
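A minimal sketch of uploading over FTP with Python's standard ftplib; the server name, credentials, and file name below are hypothetical placeholders, not real values:

    # Upload a local file to a remote server over FTP (placeholder server and credentials).
    from ftplib import FTP

    ftp = FTP("ftp.example.com")               # hypothetical FTP server
    ftp.login("username", "password")          # placeholder credentials
    with open("index.html", "rb") as f:
        ftp.storbinary("STOR index.html", f)   # transfer the file to the server
    ftp.quit()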

What is Downloading?

Downloading is the transfer of files from a remote computer to your local computer.

What is E-Mail?

As most people already know, E-mail stands for Electronic Mail and is now an integral part of business and personal communication.

What are POP and SMTP servers?

Post Office Protocol is the most common protocol used to retrieve e-mail from a mail server. Most e-mail applications (sometimes called e-mail clients) use the POP protocol, although some can use the newer IMAP (Internet Message Access Protocol). The newest version, POP3, can be used with or without SMTP (Simple Mail Transfer Protocol, the protocol for sending e-mail). IMAP servers are similar to POP servers, the main difference being that they keep the e-mail on the server, so messages can be retrieved from multiple locations or by multiple users.
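A rough sketch of both sides of that picture using Python's standard poplib and smtplib; the server names, addresses, and credentials are placeholders only:

    # Retrieve mail with POP3 and send mail with SMTP (placeholder servers and credentials).
    import poplib
    import smtplib
    from email.message import EmailMessage

    # POP3: count the messages waiting on the incoming mail server.
    pop = poplib.POP3("pop.example.com")
    pop.user("username")
    pop.pass_("password")
    print(len(pop.list()[1]), "messages waiting")
    pop.quit()

    # SMTP: hand a new message to the outgoing mail server for delivery.
    msg = EmailMessage()
    msg["From"] = "me@example.com"
    msg["To"] = "you@example.com"
    msg["Subject"] = "Hello"
    msg.set_content("Sent via SMTP.")

    smtp = smtplib.SMTP("smtp.example.com")
    smtp.send_message(msg)
    smtp.quit()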

What is WebMail?

WebMail - Provides the user an interface on the Internet so they can access their e-mail messages from any computer.

What is a CGI Service?

CGI stands for Common Gateway Interface. CGI provides a method to interface a computer program with an HTML page. CGI programs can be written to do many different things, including counting visitors to your web site, processing data obtained from online forms, and creating simple animations. If you want any of these features, it is essential that your host includes a CGI service, usually in the form of a cgi-bin directory.

What is Bandwidth?

Bandwidth, in respect to hosting, is the amount of information that can be transferred from the server to a browser. Hosts usually limit the amount of bandwidth a user has available per month. As an example, if you had a 1 MB file on your site and 1 GB of bandwidth, users could download the file about 1,000 times in total.
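The arithmetic behind that example, sketched in Python with the same illustrative numbers:

    # How many times could a 1 MB file be downloaded within a 1 GB monthly bandwidth limit?
    file_size_mb = 1
    monthly_bandwidth_mb = 1000      # 1 GB (hosts may count 1 GB as 1000 or 1024 MB)
    print(monthly_bandwidth_mb // file_size_mb)   # about 1,000 downloads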

What is Disk Space?

Disk Space - the total physical amount of hard drive space a host allows a user to have.

What is a Dedicated Server?

A Dedicated Server is one that has only a single website running on it, as opposed to a shared server, which serves up multiple websites.




















9:09 PM

Go Daddy Domains

Go Daddy is an Internet domain registrar and web hosting company that also sells e-business related software and services. In 2009, it reached more than 36 million domain names under management. Go Daddy is currently the largest ICANN-accredited registrar in the world, and is three times the size of its closest competitor. GoDaddy's domain reseller division is Wild West Domains (wildwestdomains.com).


History

Go Daddy was founded in 1997 as Jomax Technologies by Bob Parsons, who previously founded the software development company Parsons Technology, Inc. The company changed its name to Go Daddy in 1999 when a group of employees were brainstorming on a more memorable name than Jomax Technologies. Someone said "How about Big Daddy?" A quick check revealed that it was taken. Then Parsons said "How about Go Daddy?" The name was available, so he bought it. CEO Bob Parsons states the company stuck with the name because it made people smile and remember it.

Go Daddy has grown to become the largest ICANN-accredited registrar on the internet. In 2001, soon after Network Solutions was no longer the only place to register a domain, Go Daddy was approximately the same size as competitors Dotster and eNom. In April 2005 it surpassed Network Solutions in market share in terms of total domain names registered. By 2009, Go Daddy had become 3.5 times the size of eNom and 30 times the size of Dotster.

In 2002, Go Daddy sued VeriSign for domain slamming and again in 2003 over its Site Finder service. This latter suit caused controversy over VeriSign's role as the sole maintainer of the .com and the .net top-level domains. VeriSign shut down Site Finder after receiving a letter from ICANN ordering it to comply with a request to disable the service. In 2006, Go Daddy was sued by Web.com for patent infringement.

In 2007 and 2008, the company lobbied in favor of legislation that would crack down on unscrupulous online pharmacies and child predators.




















7:33 PM

Domain Name Yahoo and Yahoo Domains

Yahoo will register your domain names for under $10 a year, though as of this writing, Yahoo is having a sale on domains for $2.99 per year. That's a lot cheaper than some registrars I've seen that are still trying to charge $20 or more just for domains. (Yahoo Domains)

Yahoo Domains - Once you get your domain registered, you still need to host it somewhere. Again, Yahoo has a simple answer, or several simple answers, in its Geocities service. You can plunk down your domains for free at Yahoo Geocities, but you'll have ads. For a nominal fee ($4.95 per month) you can get 500 MB storage and 25 GB per month transfer -- that's more than enough for most starter sites.

If you host your domains through Yahoo Geocities, you'll get plenty of tools to help you design and manage your site, enough to do just about anything legal you might want to do on the Net.

You'll have access to Yahoo web page templates as well as a point-and-click designer, in addition to the ability to manually play with your domains' HTML.

Yahoo Domains - Yahoo also gives you choices for uploading: you can use their easy upload manager or the more traditional FTP for large sites. E-mail is part of the package.

A recent addition to the Yahoo Geocities tools is the ability to start and manage a blog on your domains.

All in all, registering and hosting your domains through Yahoo is an easy solution for a home user or small business.

Yahoo Domains - How do I set my domain up with Yahoo? I mean, I am not transferring it, so don't suggest that as I am not open to it. I added both IP addresses and waited 24 hrs each time, and when I go to my address/domain each IP address took me to some forumer page for servers (1 & 15). Not to mention Step 2 never showed up in the Admin CP even after waiting 48 hrs. This is a big issue for me because I like editing the HTML rather than the Board Wrappers; but Invision_Free makes setting up a domain much faster and simpler. I have had no luck with forumer, and all support topics already made for Yahoo domains are not helpful for 2 reasons: a) no one helped the person, b) no one gave them an exact answer to their question. (Yahoo Domains)


































12:42 AM

What Is a Yahoo Domain?

The well-known Internet portal Yahoo! has joined the domain name business. The company now offers domain name registration services through its Yahoo! Small Business division. This sector of Yahoo is devoted to helping entrepreneurs get their businesses up and running by providing assistance like recruiting services, marketing tools and web hosting. The next reasonable step is to offer branded Yahoo domain registration. It's another tool which will help businesses establish an online presence.

PC Magazine reviewed Yahoo Domains and gave it the highly-coveted Editor’s Choice Award. The published review states that “domain registration through Yahoo! Small Business is the easiest.” The statement reflects the level of service that one receives through a Yahoo Domain. It offers great features like domain forwarding, email forwarding, control panel with DNS management, and the most popular top-level domains. Knowledgeable customer service is available twenty-four hours a day, seven days a week and the call is toll-free. In addition to this, one to five-year registration plans are available.

Yahoo! Domains also makes upgrading very easy for those who want to expand to other Yahoo! services. Customers may upgrade to receive a personal custom mailbox or business mail package which reflects the domain name. Users may also transfer their Yahoo! domain from another web host to Yahoo! Web Hosting. If the customer needs help creating an online store, they may summon the help of Merchant Solutions which is a program that will help them manage online sales.

PC Magazine said that a Yahoo Domain was the easiest. It is quite possible to claim that the service is one of the cheapest. It offers all of the aforementioned features for only $4.98 per year at the time of this writing. With so many wonderful features at such a low price, it is hard to overlook an offer like this one.




















8:39 PM

Spam techniques

Appending


If a marketer has one database containing names, addresses, and telephone numbers of prospective customers, they can pay to have their database matched against an external database containing email addresses. The company then has the means to send email to persons who have not requested email, which may include persons who have deliberately withheld their email address.

Image spam


Image spam is an obfuscating method in which the text of the message is stored as a JPEG image and displayed in the email. This prevents text-based spam filters from detecting and blocking spam messages. Image spam is currently used largely to advertise "pump and dump" stocks.

Often, image spam contains nonsensical, computer-generated text which simply annoys the reader. However, new technology in some programs tries to read the images by attempting to find text in them. These programs are not very accurate, and sometimes filter out innocent images of products, such as a box that has words on it.

A newer technique, however, is to use an animated GIF image that does not contain clear text in its initial frame, or to contort the shapes of letters in the image (as in CAPTCHA) to avoid detection by OCR tools.

Blank spam

Blank spam is spam lacking a payload advertisement. Often the message body is missing altogether, as well as the subject line. Still, it fits the definition of spam because of its nature as bulk and unsolicited email.

Blank spam may originate in different ways, either intentionally or unintentionally:

  1. Blank spam may be sent in a directory harvest attack, a form of dictionary attack for gathering valid addresses from an email service provider. Since the goal in such an attack is to use the bounces to separate invalid addresses from the valid ones, the spammer may dispense with most elements of the header and the entire message body, and still accomplish his or her goals.
  2. Blank spam may also occur when a spammer forgets or otherwise fails to add the payload when he or she sets up the spam run.
  3. Often blank spam headers appear truncated, suggesting that computer glitches may have contributed to this problem—from poorly-written spam software to shoddy relay servers, or any problems that may truncate header lines from the message body.
  4. Some spam may appear to be blank when in fact it is not. An example of this is the VBS.Davinia.B email worm, which propagates through messages that have no subject line and appear blank, when in fact it uses HTML code to download other files.

Backscatter spam


Backscatter is a side-effect of e-mail spam, viruses and worms, where email servers receiving spam and other mail send bounce messages to an innocent party. This occurs because the original message's envelope sender is forged to contain the e-mail address of the victim. A very large proportion of such e-mail is sent with a forged From: header, matching the envelope sender.

Since these messages were not solicited by the recipients, are substantially similar to each other, and are delivered in bulk quantities, they qualify as unsolicited bulk email or spam. As such, systems that generate e-mail backscatter can end up being listed on various DNSBLs and be in violation of internet service providers' Terms of Service.




















8:31 PM

types

Spam has several definitions, varying by the source.

  • Unsolicited bulk e-mail (UBE)—unsolicited e-mail, sent in large quantities.
  • Unsolicited commercial e-mail (UCE)—this more restrictive definition is used by regulators whose mandate is to regulate commerce, such as the U.S. Federal Trade Commission.

Spamvertised sites

Many spam e-mails contain URLs to a website or websites. According to a Commtouch report in June 2004, "only five countries are hosting 99.68% of the global spammer websites", of which the foremost is China, hosting 73.58% of all web sites referred to within spam.

Most common products advertised

According to information compiled by Spam-Filter-Review.com, E-mail spam for 2006 can be broken down as follows.

E-Mail Spam by Category
Products 25%
Financial 20%
Adult 19%
Scams 9%
Health 7%
Internet 7%
Leisure 6%
Spiritual 4%
Other 3%

"Pills, porn and poker" sums up the most common products advertised in spam. Others include replica watches.

419 scams


Advance fee fraud spam such as the Nigerian "419" scam may be sent by a single individual from a cyber cafe in a developing country. Organized "spam gangs" operating from Russia or eastern Europe share many features in common with other forms of organized crime, including turf battles and revenge killings.

Phishing

Spam is also a medium for fraudsters to trick users into entering personal information on fake Web sites, using e-mail forged to look like it is from a bank or other organization such as PayPal. This is known as phishing. Spear-phishing is targeted phishing, using known information about the recipient, such as making the message look like it comes from their employer.




















8:26 PM

spam overview

From the beginning of the Internet (the ARPANET), sending of junk e-mail has been prohibited, enforced by the Terms of Service/Acceptable Use Policy (ToS/AUP) of internet service providers (ISPs) and peer pressure. Even with a thousand users junk e-mail for advertising is not tenable, and with a million users it is not only impractical, but also expensive. It is estimated that spam cost businesses on the order of $100 billion in 2007. As the scale of the spam problem has grown, ISPs and the public have turned to government for relief from spam, which has failed to materialize.




















8:19 PM

E-Mail Spam

E-mail spam, also known as junk e-mail, is a subset of spam that involves nearly identical messages sent to numerous recipients by e-mail. A common synonym for spam is unsolicited bulk e-mail (UBE). Definitions of spam usually include the aspects that email is unsolicited and sent in bulk. "UCE" refers specifically to unsolicited commercial e-mail.

E-mail spam has steadily, even exponentially, grown since the early 1990s to several billion messages a day. Spam has frustrated, confused, and annoyed e-mail users. The total volume of spam (over 100 billion emails per day as of April 2008) has leveled off slightly in recent years, and is no longer growing exponentially. The amount received by most e-mail users has decreased, mostly because of better filtering. About 80% of all spam is sent by fewer than 200 spammers. Botnets, networks of virus-infected computers, are used to send about 80% of spam. Since the cost of the spam is borne mostly by the recipient, it is effectively postage-due advertising.

The legal status of spam varies from one jurisdiction to another. In the United States, spam was declared to be legal by the CAN-SPAM Act of 2003 provided the message adheres to certain specifications. ISPs have attempted to recover the cost of spam through lawsuits against spammers, although they have been mostly unsuccessful in collecting damages despite winning in court.

Spammers collect e-mail addresses from chatrooms, websites, customer lists, newsgroups, and viruses which harvest users' address books; these addresses are often sold on to other spammers. Much spam is sent to invalid e-mail addresses. Spam averages 78% of all e-mail sent.




















8:14 PM

what is spam?

Spam is flooding the Internet with many copies of the same message, in an attempt to force the message on people who would not otherwise choose to receive it. Most spam is commercial advertising, often for dubious products, get-rich-quick schemes, or quasi-legal services. Spam costs the sender very little to send -- most of the costs are paid for by the recipient or the carriers rather than by the sender.

There are two main types of spam, and they have different effects on Internet users. Cancellable Usenet spam is a single message sent to 20 or more Usenet newsgroups. (Through long experience, Usenet users have found that any message posted to so many newsgroups is often not relevant to most or all of them.) Usenet spam is aimed at "lurkers", people who read newsgroups but rarely or never post and give their address away. Usenet spam robs users of the utility of the newsgroups by overwhelming them with a barrage of advertising or other irrelevant posts. Furthermore, Usenet spam subverts the ability of system administrators and owners to manage the topics they accept on their systems.

Email spam targets individual users with direct mail messages. Email spam lists are often created by scanning Usenet postings, stealing Internet mailing lists, or searching the Web for addresses. Email spams typically cost users money out-of-pocket to receive. Many people - anyone with measured phone service - read or receive their mail while the meter is running, so to speak. Spam costs them additional money. On top of that, it costs money for ISPs and online services to transmit spam, and these costs are transmitted directly to subscribers.

One particularly nasty variant of email spam is sending spam to mailing lists (public or private email discussion forums.) Because many mailing lists limit activity to their subscribers, spammers will use automated tools to subscribe to as many mailing lists as possible, so that they can grab the lists of addresses, or use the mailing list as a direct target for their attacks.




















12:48 PM

Network address translation

Firewalls often have network address translation (NAT) functionality, and the hosts protected behind a firewall commonly have addresses in the "private address range", as defined in RFC 1918. Firewalls often have such functionality to hide the true address of protected hosts. Originally, the NAT function was developed to address the limited number of IPv4 routable addresses that could be used or assigned to companies or individuals, as well as to reduce the number, and therefore the cost, of public addresses needed for every computer in an organization. Hiding the addresses of protected devices has become an increasingly important defense against network reconnaissance.
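A small sketch, using Python's ipaddress module, of checking whether a host falls in one of the RFC 1918 private ranges that NAT typically hides; the addresses are only examples:

    # Check whether an IP address falls in a private (RFC 1918) range.
    import ipaddress

    for addr in ["192.168.1.10", "10.0.0.5", "64.149.219.213"]:
        ip = ipaddress.ip_address(addr)
        print(addr, "private (behind NAT)" if ip.is_private else "public (routable)")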




















12:46 PM

proxy

A proxy device (running either on dedicated hardware or as software on a general-purpose machine) may act as a firewall by responding to input packets (connection requests, for example) in the manner of an application, whilst blocking other packets.

Proxies make tampering with an internal system from the external network more difficult and misuse of one internal system would not necessarily cause a security breach exploitable from outside the firewall (as long as the application proxy remains intact and properly configured). Conversely, intruders may hijack a publicly-reachable system and use it as a proxy for their own purposes; the proxy then masquerades as that system to other internal machines. While use of internal address spaces enhances security, crackers may still employ methods such as IP spoofing to attempt to pass packets to a target network.




















12:36 PM

firewall types

There are several classifications of firewalls depending on where the communication is taking place, where the communication is intercepted and the state that is being traced.

Network layer and packet filters

Network layer firewalls, also called packet filters, operate at a relatively low level of the TCP/IP protocol stack, not allowing packets to pass through the firewall unless they match the established rule set. The firewall administrator may define the rules; or default rules may apply. The term "packet filter" originated in the context of BSD operating systems.

Network layer firewalls generally fall into two sub-categories, stateful and stateless. Stateful firewalls maintain context about active sessions, and use that "state information" to speed packet processing. Any existing network connection can be described by several properties, including source and destination IP address, UDP or TCP ports, and the current stage of the connection's lifetime (including session initiation, handshaking, data transfer, or connection completion). If a packet does not match an existing connection, it will be evaluated according to the ruleset for new connections. If a packet matches an existing connection based on comparison with the firewall's state table, it will be allowed to pass without further processing.

Stateless firewalls require less memory, and can be faster for simple filters that require less time to filter than to look up a session. They may also be necessary for filtering stateless network protocols that have no concept of a session. However, they cannot make more complex decisions based on what stage communications between hosts have reached.

Modern firewalls can filter traffic based on many packet attributes, such as source IP address, source port, destination IP address or port, and destination service like WWW or FTP. They can also filter based on protocols, TTL values, the netblock of the originator, and many other attributes.

Commonly used packet filters on various versions of Unix are ipf (various), ipfw (FreeBSD/Mac OS X), pf (OpenBSD, and all other BSDs), and iptables/ipchains (Linux).
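The rule-matching idea behind a stateless packet filter can be sketched in a few lines of Python; the rules and sample packets below are made up for illustration and are not tied to any particular firewall product:

    # Toy stateless packet filter: match each packet against an ordered rule set.
    RULES = [
        {"proto": "tcp", "dst_port": 80,   "action": "accept"},  # allow web traffic
        {"proto": "tcp", "dst_port": 23,   "action": "drop"},    # silently discard telnet
        {"proto": "any", "dst_port": None, "action": "reject"},  # default: reject everything else
    ]

    def filter_packet(packet):
        for rule in RULES:
            proto_ok = rule["proto"] in ("any", packet["proto"])
            port_ok = rule["dst_port"] in (None, packet["dst_port"])
            if proto_ok and port_ok:
                return rule["action"]
        return "reject"

    print(filter_packet({"proto": "tcp", "dst_port": 80}))   # accept
    print(filter_packet({"proto": "udp", "dst_port": 53}))   # reject (default rule)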

Application-layer

Application-layer firewalls work on the application level of the TCP/IP stack (i.e., all browser traffic, or all telnet or FTP traffic), and may intercept all packets traveling to or from an application. They block other packets (usually dropping them without acknowledgment to the sender). In principle, application firewalls can prevent all unwanted outside traffic from reaching protected machines.

By inspecting all packets for improper content, firewalls can restrict or prevent outright the spread of networked computer worms and trojans. The additional inspection criteria can add extra latency to the forwarding of packets to their destination.




















12:23 PM

firewall-other generations

Second generation - Application layer

The key benefit of application layer filtering is that it can "understand" certain applications and protocols (such as File Transfer Protocol, DNS, or web browsing), and it can detect whether an unwanted protocol is being sneaked through on a non-standard port or whether a protocol is being abused in any harmful way.

Third generation - "stateful" filters

From 1989 to 1990, three colleagues from AT&T Bell Laboratories, Dave Presotto, Janardan Sharma, and Kshitij Nigam, developed the third generation of firewalls, calling them circuit-level firewalls.

Third-generation firewalls additionally consider the placement of each individual packet within a packet series. This technology is generally referred to as stateful packet inspection, as it maintains records of all connections passing through the firewall and is able to determine whether a packet is the start of a new connection, part of an existing connection, or an invalid packet. Though there is still a set of static rules in such a firewall, the state of a connection can in itself be one of the criteria which trigger specific rules.

This type of firewall can help prevent attacks which exploit existing connections, or certain Denial-of-service attacks.

Subsequent developments

In 1992, Bob Braden and Annette DeSchon at the University of Southern California (USC) were refining the concept of a firewall. The product known as "Visas" was the first system to have a visual integration interface with colours and icons, which could be easily implemented on and accessed from a computer operating system such as Microsoft's Windows or Apple's MacOS. In 1994 an Israeli company called Check Point Software Technologies built this into readily available software known as FireWall-1.

The existing deep packet inspection functionality of modern firewalls can be shared by Intrusion-prevention systems (IPS).

Currently, the Middlebox Communication Working Group of the Internet Engineering Task Force (IETF) is working on standardizing protocols for managing firewalls and other middleboxes.

Another axis of development is the integration of user identity into firewall rules. Many firewalls provide such features by binding user identities to IP or MAC addresses, which is only approximate and can be easily circumvented. The NuFW firewall provides real identity-based firewalling by requiring the user's signature for each connection.




















12:17 PM

history of firewall

First generation - packet filters

The first paper published on firewall technology was in 1988, when engineers from Digital Equipment Corporation (DEC) developed filter systems known as packet filter firewalls. This fairly basic system was the first generation of what would become a highly evolved and technical internet security feature. At AT&T Bell Labs, Bill Cheswick and Steve Bellovin were continuing their research in packet filtering and developed a working model for their own company based upon their original first generation architecture.

Packet filters act by inspecting the "packets" which represent the basic unit of data transfer between computers on the Internet. If a packet matches the packet filter's set of rules, the packet filter will drop (silently discard) the packet, or reject it (discard it, and send "error responses" to the source).

This type of packet filtering pays no attention to whether a packet is part of an existing stream of traffic (it stores no information on connection "state"). Instead, it filters each packet based only on information contained in the packet itself (most commonly using a combination of the packet's source and destination address, its protocol, and, for TCP and UDP traffic, the port number).

TCP and UDP protocols comprise most communication over the Internet, and because TCP and UDP traffic by convention uses well known ports for particular types of traffic, a "stateless" packet filter can distinguish between, and thus control, those types of traffic (such as web browsing, remote printing, email transmission, file transfer), unless the machines on each side of the packet filter are both using the same non-standard ports.




















12:03 PM

history of firewall

The term "firewall" originally meant a wall to confine a fire or potential fire within a building, cf. firewall (construction). Later uses refer to similar structures, such as the metal sheet separating the engine compartment of a vehicle or aircraft from the passenger compartment.

Firewall technology emerged in the late 1980s when the Internet was a fairly new technology in terms of its global use and connectivity. The predecessors to firewalls for network security were the routers used in the late 1980s to separate networks from one another. The view of the Internet as a relatively small community of compatible users who valued openness for sharing and collaboration was ended by a number of major internet security breaches which occurred in the late 1980s:

  • Clifford Stoll's discovery of German spies tampering with his system
  • Bill Cheswick's 1992 "Evening with Berferd," in which he set up a simple electronic jail to observe an attacker
  • In 1988 an employee at the NASA Ames Research Center in California sent a memo by email to his colleagues that read, "We are currently under attack from an Internet VIRUS! It has hit Berkeley, UC San Diego, Lawrence Livermore, Stanford, and NASA Ames."
  • The Morris Worm spread itself through multiple vulnerabilities in the machines of the time. Although it was not malicious in intent, the Morris Worm was the first large scale attack on Internet security; the online community was neither expecting an attack nor prepared to deal with one.




















11:58 AM

function of firewall

A firewall is a dedicated appliance, or software running on a computer, which inspects network traffic passing through it, and denies or permits passage based on a set of rules.

It is software or hardware that is normally placed between a protected network and an unprotected network and acts like a gate to protect assets, ensuring that nothing private goes out and nothing malicious comes in.

A firewall's basic task is to regulate some of the flow of traffic between computer networks of different trust levels. Typical examples are the Internet which is a zone with no trust and an internal network which is a zone of higher trust. A zone with an intermediate trust level, situated between the Internet and a trusted internal network, is often referred to as a "perimeter network" or Demilitarized zone (DMZ).

A firewall's function within a network is similar to that of physical firewalls and fire doors in building construction. In the former case, it is used to prevent network intrusion into the private network. In the latter case, it is intended to contain and delay a structural fire, preventing it from spreading to adjacent structures.

Without proper configuration, a firewall can often become worthless. Standard security practices dictate a "default-deny" firewall ruleset, in which the only network connections which are allowed are the ones that have been explicitly allowed. Unfortunately, such a configuration requires detailed understanding of the network applications and endpoints required for the organization's day-to-day operation. Many businesses lack such understanding, and therefore implement a "default-allow" ruleset, in which all traffic is allowed unless it has been specifically blocked. This configuration makes inadvertent network connections and system compromise much more likely.




















11:54 AM

types of firewall

There are several types of firewall techniques:

  1. Packet filter: Packet filtering inspects each packet passing through the network and accepts or rejects it based on user-defined rules. Although difficult to configure, it is fairly effective and mostly transparent to its users. However, it is susceptible to IP spoofing.
  2. Application gateway: Applies security mechanisms to specific applications, such as FTP and Telnet servers. This is very effective, but can impose a performance degradation.
  3. Circuit-level gateway: Applies security mechanisms when a TCP or UDP connection is established. Once the connection has been made, packets can flow between the hosts without further checking.
  4. Proxy server: Intercepts all messages entering and leaving the network. The proxy server effectively hides the true network addresses.




















11:51 AM

Firewall

A firewall is a part of a computer system or network that is designed to block unauthorized access while permitting authorized communications. It is a device or set of devices configured to permit, deny, encrypt, decrypt, or proxy all (in and out) computer traffic between different security domains based upon a set of rules and other criteria.

Firewalls can be implemented in either hardware or software, or a combination of both. Firewalls are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. All messages entering or leaving the intranet pass through the firewall, which examines each message and blocks those that do not meet the specified security criteria.




















11:42 AM

Obtaining web hosting

Web hosting is often provided as part of a general Internet access plan; there are many free and paid providers offering these services.

A customer needs to evaluate the requirements of the application to choose what kind of hosting to use. Such considerations include database server software, scripting software, and operating system. Most hosting providers provide Linux-based web hosting which offers a wide range of different software. A typical configuration for a Linux server is the LAMP platform: Linux, Apache, MySQL, and PHP/Perl/Python. The webhosting client may want to have other services, such as email for their business domain, databases or multi-media services for streaming media. A customer may also choose Windows as the hosting platform. The customer still can choose from PHP, Perl, and Python but may also use ASP .Net or Classic ASP.

Web hosting packages often include a Web Content Management System, so the end-user doesn't have to worry about the more technical aspects. These Web Content Management Systems are great for the average user, but for those who want more control over their website design, this feature may not be adequate. You can always install any content management system on your servers and modify it at will. A few good examples include WordPress, Joomla, Drupal and MediaWiki.

One may also search the Internet to find active webhosting message boards and forums that may provide feedback on what type of webhosting company may suit his/her needs.




















11:35 AM

types of web hosting cont'd...


  • Colocation web hosting service: similar to the dedicated web hosting service, but the user owns the colo server; the hosting company provides physical space that the server takes up and takes care of the server. This is the most powerful and expensive type of the web hosting service. In most cases, the colocation provider may provide little to no support directly for their client's machine, providing only the electrical, Internet access, and storage facilities for the server. In most cases for colo, the client would have his own administrator visit the data center on site to do any hardware upgrades or changes.
  • Cloud hosting: a newer type of hosting platform that offers customers powerful, scalable and reliable hosting based on clustered, load-balanced servers and utility billing. It removes single points of failure and allows customers to pay only for what they use rather than what they could use.
  • Clustered hosting: having multiple servers hosting the same content for better resource utilization. Clustered Servers are a perfect solution for high-availability dedicated hosting, or creating a scalable web hosting solution. A cluster may separate web serving from database hosting capability.
  • Grid hosting: this form of distributed hosting is when a server cluster acts like a grid and is composed of multiple nodes.
  • Home server: usually a single machine placed in a private residence can be used to host one or more web sites from a usually consumer-grade broadband connection. These can be purpose-built machines or more commonly old PCs. Some ISPs actively attempt to block home servers by disallowing incoming requests to TCP port 80 of the user's connection and by refusing to provide static IP addresses. A common way to attain a reliable DNS hostname is by creating an account with a dynamic DNS service. A dynamic DNS service will automatically change the IP address that a URL points to when the IP address changes.

Some specific types of hosting provided by web host service providers:

  • File hosting service: hosts files, not web pages
  • Image hosting service
  • Video hosting service
  • Blog hosting service
  • One-click hosting
  • Pastebin Hosts text snippets
  • Shopping cart software




















11:30 AM

types of web hosting

Internet hosting services can run Web servers; see Internet hosting services.

Hosting services limited to the Web:

Many large companies that are not internet service providers also need a computer permanently connected to the web so they can send email, files, etc. to other sites. They may also use the computer as a website host so they can provide details of their goods and services to anyone interested, who may then also decide to place online orders.

  • Free web hosting service: Free web hosting is offered by different companies with limited services, sometimes advertisement-supported web hosting, and is often limited when compared to paid hosting.
  • Shared web hosting service: one's website is placed on the same server as many other sites, ranging from a few to hundreds or thousands. Typically, all domains may share a common pool of server resources, such as RAM and the CPU. The features available with this type of service can be quite extensive. A shared website may be hosted with a reseller.
  • Reseller web hosting: allows clients to become web hosts themselves. Resellers could function, for individual domains, under any combination of these listed types of hosting, depending on who they are affiliated with as a provider. Resellers' accounts may vary tremendously in size: they may have their own virtual dedicated server to a collocated server. Many resellers provide a nearly identical service to their provider's shared hosting plan and provide the technical support themselves.
  • Virtual Dedicated Server: also known as a Virtual Private Server (VPS for short) divides server resources into virtual servers, where resources can be allocated in a way that does not directly reflect the underlying hardware. VPS will often be allocated resources based on a one server to many VPSs relationship, however virtualisation may be done for a number of reasons, including the ability to move a VPS container between servers. The users may have root access to their own virtual space. This is also known as a virtual private server or VPS. Customers are sometimes responsible for patching and maintaining the server.
  • Dedicated hosting service: the user gets his or her own Web server and gains full control over it (root access for Linux/administrator access for Windows); however, the user typically does not own the server. Another type of Dedicated hosting is Self-Managed or Unmanaged. This is usually the least expensive for Dedicated plans. The user has full administrative access to the box, which means the client is responsible for the security and maintenance of his own dedicated box.
  • Managed hosting service: the user gets his or her own Web server but is not allowed full control over it (root access for Linux/administrator access for Windows); however, they are allowed to manage their data via FTP or other remote management tools. The user is disallowed full control so that the provider can guarantee quality of service by not allowing the user to modify the server or potentially create configuration problems. The user typically does not own the server. The server is leased to the client.




















11:28 AM

web hosting uptime

Hosting uptime refers to the percentage of time the host is accessible via the internet. Many providers state that they aim for at least 99.9% uptime (roughly equivalent to 45 minutes of downtime a month, or less), but there may be server restarts and planned (or unplanned) maintenance in any hosting environment, which may or may not be considered part of the official uptime promise.
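A quick sketch of the arithmetic behind those uptime percentages, assuming a 30-day month:

    # Convert an uptime percentage into allowed downtime per 30-day month.
    minutes_per_month = 30 * 24 * 60   # 43,200 minutes
    for uptime in (99.0, 99.9, 99.99):
        downtime = minutes_per_month * (1 - uptime / 100)
        print(f"{uptime}% uptime -> about {downtime:.0f} minutes of downtime per month")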

Many providers tie uptime and accessibility into their own service level agreement (SLA). SLAs sometimes include refunds or reduced costs if performance goals are not met.




















11:22 AM

web hosting Service scope

The scope of hosting services varies widely. The most basic is web page and small-scale file hosting, where files can be uploaded via File Transfer Protocol (FTP) or a Web interface. The files are usually delivered to the Web "as is" or with little processing. Many Internet service providers (ISPs) offer this service free to their subscribers. People can also obtain Web page hosting from other, alternative service providers. Personal web site hosting is typically free, advertisement-sponsored, or cheap. Business web site hosting often has a higher expense.

Single page hosting is generally sufficient only for personal web pages. A complex site calls for a more comprehensive package that provides database support and application development platforms (e.g. PHP, Java, Ruby on Rails, ColdFusion, and ASP.NET). These facilities allow customers to write or install scripts for applications like forums and content management. For e-commerce, SSL is also highly recommended.

The host may also provide an interface or control panel for managing the Web server and installing scripts as well as other services like e-mail. Some hosts specialize in certain software or services (e.g. e-commerce). They are commonly used by larger companies to outsource network infrastructure to a hosting company.




















11:17 AM

web hosting

A web hosting service is a type of Internet hosting service that allows individuals and organizations to make their own website accessible via the World Wide Web. Web hosts are companies that provide space on a server they own or lease for use by their clients as well as providing Internet connectivity, typically in a data center. Web hosts can also provide data center space and connectivity to the Internet for servers they do not own to be located in their data center, called colocation.




















3:43 AM

Overload causes

At any time web servers can be overloaded because of:

  • Too much legitimate web traffic. Thousands or even millions of clients hitting the web site in a short interval of time. (e.g. Slashdot effect);
  • DDoS. Distributed Denial of Service attacks;
  • Computer worms that sometimes cause abnormal traffic because of millions of infected computers (not coordinated among them);
  • XSS viruses can cause high traffic because of millions of infected browsers and/or web servers;
  • Internet web robots. Traffic not filtered/limited on large web sites with very few resources (bandwidth, etc.);
  • Internet (network) slowdowns, so that client requests are served more slowly and the number of connections increases so much that server limits are reached;
  • Web servers (computers) partial unavailability. This can happen because of required or urgent maintenance or upgrade, hardware or software failures, back-end (i.e. DB) failures, etc.; in these cases the remaining web servers get too much traffic and become overloaded.

Overload symptoms

The symptoms of an overloaded web server are:

  • requests are served with (possibly long) delays (from 1 second to a few hundred seconds);
  • 500, 502, 503, 504 HTTP errors are returned to clients (sometimes also unrelated 404 error or even 408 error may be returned);
  • TCP connections are refused or reset (interrupted) before any content is sent to clients;
  • in very rare cases, only partial contents are sent (but this behavior may well be considered a bug, even if it usually depends on unavailable system resources).

Anti-overload techniques

To partially overcome above load limits and to prevent overload, most popular web sites use common techniques like:

  • managing network traffic, by using:
    • Firewalls to block unwanted traffic coming from bad IP sources or having bad patterns;
    • HTTP traffic managers to drop, redirect or rewrite requests having bad HTTP patterns;
    • Bandwidth management and traffic shaping, in order to smooth down peaks in network usage;
  • deploying web cache techniques;
  • using different domain names to serve different (static and dynamic) content by separate Web servers.




















3:33 AM

web servers...

Load limits

A web server (program) has defined load limits, because it can handle only a limited number of concurrent client connections (usually between 2 and 80,000, by default between 500 and 1,000) per IP address (and TCP port) and it can serve only a certain maximum number of requests per second depending on:

  • its own settings;
  • the HTTP request type;
  • content origin (static or dynamic);
  • the fact that the served content is or is not cached;
  • the hardware and software limits of the OS where it is working.

When a web server is near to or over its limits, it becomes overloaded and thus unresponsive.

Kernel-mode and user-mode web servers

A web server can be either implemented into the OS kernel, or in user space (like other regular applications).

An in-kernel web server (like TUX on Linux or Microsoft IIS on Windows) will usually work faster because, as part of the system, it can directly use all the hardware resources it needs, such as:

  • non-paged memory;
  • CPU time-slices;
  • network adapters buffers.

Web servers that run in user-mode have to ask the system for permission to use more memory or more CPU resources. Not only do these requests to the kernel take time, but they are not always granted, because the system reserves resources for its own usage and has the responsibility to share hardware resources with all the other running applications.

Also, applications cannot access the system's internal buffers, which causes redundant buffer copies and creates another handicap for user-mode web servers. As a consequence, the only way for a user-mode web server to match kernel-mode performance is to raise the quality of its code to much higher standards than the code used in a web server that runs in the kernel.

This is more difficult under Windows than under Linux, where the user-mode overhead is about six times smaller than under Windows.




















10:21 PM

web servers...

Common features

  1. Virtual hosting to serve many web sites using one IP address.
  2. Large file support to be able to serve files whose size is greater than 2 GB on 32 bit OS.
  3. Bandwidth throttling to limit the speed of responses in order to not saturate the network and to be able to serve more clients.

Origin of returned content

The origin of the content sent by server is known as:

  • static if it comes from an existing file lying on a file system;
  • dynamic if it is dynamically generated by some other program or script or application programming interface (API) called by the web server.

Serving static content is usually much faster (from 2 to 100 times) than serving dynamic content, especially if the latter involves data pulled from a database.

Path translation

Web servers are able to map the path component of a Uniform Resource Locator(URL) into:

  • a local file system resource (for static requests);
  • an internal or external program name (for dynamic requests).
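A minimal sketch of that mapping for static requests, assuming a hypothetical document root of /var/www/html (a real server would also reject ".." path traversal and apply other checks):

    # Translate the path component of a URL into a local file system path.
    from pathlib import Path
    from urllib.parse import urlparse

    DOCUMENT_ROOT = Path("/var/www/html")   # hypothetical document root

    def translate_path(url):
        path = urlparse(url).path.lstrip("/") or "index.html"
        return DOCUMENT_ROOT / path

    print(translate_path("http://www.example.com/docs/page.html"))
    # -> /var/www/html/docs/page.html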




















8:39 PM

web servers

A web server is a computer program that delivers (serves) content, such as this web page, using the Hypertext Transfer Protocol. The term web server can also refer to the computer or virtual machine running the program.


The primary function of a web server is to deliver web pages (HTML documents) and associated content (e.g. images, style sheets, JavaScript files) to clients. A client, commonly a web browser or web crawler, makes a request for a specific resource using HTTP and, if all goes well, the server responds with the content of that resource. The resource is typically a real file on the server's secondary storage, but this is not necessarily the case and depends on how the web server is implemented.
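As a rough sketch of that request/response loop, Python's standard library can serve the files in the current directory over HTTP in a few lines (suitable for experimentation only, not for production hosting):

    # Minimal static web server: answers each HTTP request with the matching local file.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
    print("Serving http://localhost:8000/ ...")
    server.serve_forever()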

While the primary function is to serve content, a full implementation of HTTP also includes a way of receiving content from clients. This feature is used for submitting web forms, including uploading of files.

Many generic web servers also support server-side scripting (e.g. Apache HTTP Server and PHP). This means that a script can be executed by the server when a client requests it. Usually, this functionality is used to create HTML documents on the fly, as opposed to returning fixed documents. These are referred to as dynamic and static content respectively.

Highly niched web servers can be found in devices such as printers and routers in order to ease administration using a familiar user interface in the form of a web page.


History of Web Servers


In 1989 Tim Berners-Lee proposed to his employer CERN (European Organization for Nuclear Research) a new project, which had the goal of easing the exchange of information between scientists by using a hypertext system. As a result of the implementation of this project, in 1990 Berners-Lee wrote two programs:

  • a browser called WorldWideWeb;
  • the world's first web server, later known as CERN httpd, which ran on NeXTSTEP.

Between 1991 and 1994 the simplicity and effectiveness of early technologies used to surf and exchange data through the World Wide Web helped to port them to many different operating systems and spread their use among lots of different social groups of people, first in scientific organizations, then in universities and finally in industry.

In 1994 Tim Berners-Lee decided to constitute the World Wide Web Consortium to regulate the further development of the many technologies involved (HTTP, HTML, etc.) through a standardization process.




















8:31 PM

Web Servers, Web Hosting, Firewalls, Mail Spam





