Common features

Origin of returned content

The origin of the content sent by the server is known as static if it comes from an existing file, or dynamic if it is generated on the fly by a program. Serving static content is usually much faster (from 2 to 100 times) than serving dynamic content, especially if the latter involves data pulled from a database.

Path translation

Web servers are able to map the path component of a Uniform Resource Locator (URL) into:
- a local file system resource (for static requests)
- an internal or external program name (for dynamic requests)
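To make path translation concrete, here is a minimal sketch in Python. The document root and file names are invented for illustration, and real servers perform many more checks (MIME types, access control, index files):

```python
from pathlib import Path

DOC_ROOT = Path("/var/www")  # hypothetical document root

def translate_path(url_path: str) -> Path:
    """Map the path component of a URL to a file under the document root."""
    candidate = (DOC_ROOT / url_path.lstrip("/")).resolve()
    # Refuse paths that escape the document root (e.g. "/../../etc/passwd").
    if not candidate.is_relative_to(DOC_ROOT):
        raise PermissionError("path escapes document root")
    return candidate

print(translate_path("/index.html"))  # -> /var/www/index.html
```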
The well-known Internet portal Yahoo! has joined the domain name business. The company now offers domain name registration services through its Yahoo! Small Business division, a sector of Yahoo! devoted to helping entrepreneurs get their businesses up and running with assistance like recruiting services, marketing tools, and web hosting. The next reasonable step is branded Yahoo! domain registration: another tool to help businesses establish an online presence.

PC Magazine reviewed Yahoo! Domains and gave it the highly coveted Editors' Choice Award. The published review states that "domain registration through Yahoo! Small Business is the easiest." The statement reflects the level of service that one receives with a Yahoo! domain. It offers features like domain forwarding, email forwarding, a control panel with DNS management, and the most popular top-level domains. Knowledgeable customer service is available twenty-four hours a day, seven days a week, and the call is toll-free. In addition, one- to five-year registration plans are available.

Yahoo! Domains also makes upgrading easy for those who want to expand to other Yahoo! services. Customers may upgrade to a personal custom mailbox or a business mail package that reflects the domain name. Users may also transfer a domain from another web host to Yahoo! Web Hosting. Customers who need help creating an online store can turn to Merchant Solutions, a program that helps them manage online sales.

PC Magazine called Yahoo! Domains the easiest; it is arguably also one of the cheapest. It offers all of the aforementioned features for only $4.98 per year at the time of this writing. With so many features at such a low price, the offer is hard to overlook.
Appending
If a marketer has one database containing names, addresses, and telephone numbers of prospective customers, they can pay to have the database matched against an external database containing email addresses. The company then has the means to send email to persons who have not requested email, which may include persons who have deliberately withheld their email address.
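As a rough sketch of the matching step described above (the data, field names, and exact-match rule are all invented for illustration; real append services use fuzzier record linkage):

```python
# Marketer's records: names and addresses, but no e-mail column.
prospects = [
    {"name": "Ada Lovelace", "address": "12 Byron St"},
    {"name": "Alan Turing", "address": "7 Bletchley Rd"},
]

# External vendor database keyed on the same identifying fields.
vendor = {
    ("ada lovelace", "12 byron st"): "ada@example.com",
}

def append_emails(records, lookup):
    """Attach an e-mail address wherever name+address match the vendor data."""
    for rec in records:
        key = (rec["name"].lower(), rec["address"].lower())
        rec["email"] = lookup.get(key)  # None when no match is found
    return records

print(append_emails(prospects, vendor))
```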
Image spam
Image spam is an obfuscation method in which the text of the message is stored as a JPEG image and displayed in the email. This prevents text-based spam filters from detecting and blocking spam messages. Image spam is currently used largely to advertise "pump and dump" stocks.
Often, image spam contains nonsensical, computer-generated text which simply annoys the reader. However, newer technology in some programs tries to read the images by attempting to find text in them. These programs are not very accurate and sometimes filter out innocent images of products, such as a box that has words on it.
A newer technique, however, is to use an animated GIF image that does not contain clear text in its initial frame, or to contort the shapes of letters in the image (as in CAPTCHA) to avoid detection by OCR tools.
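A hedged sketch of why image-only messages are easy to single out even without OCR: the filter below flags messages that carry an image but almost no text. The heuristic and the 20-character threshold are assumptions for illustration, not any particular filter's algorithm:

```python
import email
from email import policy

def looks_like_image_spam(raw_message: bytes) -> bool:
    """Flag messages that carry images but (nearly) no plain text."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    has_image, text_chars = False, 0
    for part in msg.walk():
        if part.get_content_maintype() == "image":
            has_image = True
        elif part.get_content_type() == "text/plain":
            text_chars += len(part.get_content().strip())
    return has_image and text_chars < 20  # threshold is arbitrary
```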
Blank spam
Blank spam is spam lacking a payload advertisement. Often the message body is missing altogether, as well as the subject line. Still, it fits the definition of spam because of its nature as bulk and unsolicited email.
Blank spam may originate in different ways, either intentionally or unintentionally (a minimal detection sketch follows this list):
- Blank spam can have been sent in a directory harvest attack, a form of dictionary attack for gathering valid addresses from an email service provider. Since the goal in such an attack is to use the bounces to separate invalid addresses from the valid ones, the spammer may dispense with most elements of the header and the entire message body, and still accomplish his or her goals.
- Blank spam may also occur when a spammer forgets or otherwise fails to add the payload when he or she sets up the spam run.
- Often blank spam headers appear truncated, suggesting that computer glitches may have contributed to this problem—from poorly-written spam software to shoddy relay servers, or any problems that may truncate header lines from the message body.
- Some spam may appear to be blank when in fact it is not. An example of this is the VBS.Davinia.B email worm, which propagates through messages that have no subject line and appear blank, when in fact it uses HTML code to download other files.
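The detection sketch promised above: since blank spam has neither subject nor body, a filter can flag it with a trivial test. This is illustrative only; real filters combine this signal with many others:

```python
import email
from email import policy

def is_blank(raw_message: bytes) -> bool:
    """True for messages with an empty Subject and an empty body."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    body = msg.get_body(preferencelist=("plain", "html"))
    text = body.get_content().strip() if body else ""
    return not (msg["Subject"] or "").strip() and not text
```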
Backscatter spam
Backscatter is a side-effect of e-mail spam, viruses, and worms, in which email servers receiving spam and other mail send bounce messages to an innocent party. This occurs because the original message's envelope sender is forged to contain the e-mail address of the victim. A very large proportion of such e-mail is sent with a forged From: header matching the envelope sender.
Since these messages were not solicited by the recipients, are substantially similar to each other, and are delivered in bulk quantities, they qualify as unsolicited bulk email or spam. As such, systems that generate e-mail backscatter can end up being listed on various DNSBLs and be in violation of internet service providers' Terms of Service.
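One common countermeasure, added here purely as an illustration (it is not described above), is to tag outgoing envelope senders so that bounces addressed to untagged or badly tagged addresses can be rejected as backscatter, in the spirit of Bounce Address Tag Validation (BATV). A minimal sketch; the secret, tag length, and address format are all assumptions:

```python
import hmac, hashlib

SECRET = b"site-local secret"  # assumed: known only to our mail system

def tag_sender(local: str, domain: str) -> str:
    """Sign the envelope sender of outgoing mail (prvs-style tag)."""
    sig = hmac.new(SECRET, local.encode(), hashlib.sha256).hexdigest()[:8]
    return f"prvs={sig}={local}@{domain}"

def bounce_is_legitimate(rcpt: str) -> bool:
    """Accept a bounce only if it targets an address we actually signed."""
    if not rcpt.startswith("prvs="):
        return False  # we always tag, so an untagged bounce is backscatter
    _, sig, rest = rcpt.split("=", 2)
    local = rest.split("@", 1)[0]
    expect = hmac.new(SECRET, local.encode(), hashlib.sha256).hexdigest()[:8]
    return hmac.compare_digest(sig, expect)

addr = tag_sender("alice", "example.com")
print(addr, bounce_is_legitimate(addr))  # a tagged address verifies as ours
```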
Spam has several definitions, varying by the source.
- Unsolicited bulk e-mail (UBE)—unsolicited e-mail, sent in large quantities.
- Unsolicited commercial e-mail (UCE)—this more restrictive definition is used by regulators whose mandate is to regulate commerce, such as the U.S. Federal Trade Commission.
Spamvertised sites
Many spam e-mails contain URLs to a website or websites. According to a Commtouch report in June 2004, "only five countries are hosting 99.68% of the global spammer websites", of which the foremost is China, hosting 73.58% of all web sites referred to within spam.
Most common products advertised
According to information compiled by Spam-Filter-Review.com, E-mail spam for 2006 can be broken down as follows.
| Category | Share |
|---|---|
| Products | 25% |
| Financial | 20% |
| Adult | 19% |
| Scams | 9% |
| Health | 7% |
| Internet | 7% |
| Leisure | 6% |
| Spiritual | 4% |
| Other | 3% |
"Pills, porn and poker" sums up the most common products advertised in spam. Others include replica watches.
419 scams
Advance fee fraud spam such as the Nigerian "419" scam may be sent by a single individual from a cyber cafe in a developing country. Organized "spam gangs" operating from Russia or eastern Europe share many features with other forms of organized crime, including turf battles and revenge killings.
Phishing
Spam is also a medium for fraudsters to scam users into entering personal information on fake Web sites, using e-mail forged to look like it is from a bank or other organization such as PayPal. This is known as phishing. Spear-phishing is targeted phishing that uses known information about the recipient, such as making the message look like it comes from their employer (a link-checking sketch follows the next paragraph).

From the beginning of the Internet (the ARPANET), sending junk e-mail has been prohibited, enforced by the Terms of Service/Acceptable Use Policy (ToS/AUP) of Internet service providers (ISPs) and by peer pressure. Even with a thousand users, junk e-mail for advertising is not tenable, and with a million users it is not only impractical but also expensive. It is estimated that spam cost businesses on the order of $100 billion in 2007. As the scale of the spam problem has grown, ISPs and the public have turned to government for relief from spam, which has failed to materialize.
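The link-checking sketch mentioned under Phishing above: one simple heuristic flags HTML mail in which the visible link text names a different domain than the actual href. Everything here (the parser, the heuristic, the sample HTML) is illustrative, not a production filter:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links, self._href, self._text = [], None, ""
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href, self._text = dict(attrs).get("href"), ""
    def handle_data(self, data):
        if self._href is not None:
            self._text += data
    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, self._text.strip()))
            self._href = None

def suspicious_links(html: str):
    """Yield links whose visible text names a different domain than the href."""
    auditor = LinkAuditor()
    auditor.feed(html)
    for href, text in auditor.links:
        if text.startswith("http") and urlparse(text).netloc != urlparse(href).netloc:
            yield href, text

html = '<a href="http://evil.example/login">http://www.paypal.com/login</a>'
print(list(suspicious_links(html)))  # displayed and real domains differ
```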
E-mail spam, also known as junk e-mail, is a subset of spam that involves nearly identical messages sent to numerous recipients by e-mail. A common synonym for spam is unsolicited bulk e-mail (UBE). Definitions of spam usually include the aspects that email is unsolicited and sent in bulk. "UCE" refers specifically to unsolicited commercial e-mail.
E-mail spam has grown steadily, even exponentially, since the early 1990s, to several billion messages a day. Spam has frustrated, confused, and annoyed e-mail users. The total volume of spam (over 100 billion emails per day as of April 2008) has leveled off slightly in recent years and is no longer growing exponentially. The amount received by most e-mail users has decreased, mostly because of better filtering. About 80% of all spam is sent by fewer than 200 spammers. Botnets, networks of virus-infected computers, are used to send about 80% of spam. Since the cost of spam is borne mostly by the recipient, it is effectively postage-due advertising.
The legal status of spam varies from one jurisdiction to another. In the United States, spam was declared to be legal by the CAN-SPAM Act of 2003 provided the message adheres to certain specifications. ISPs have attempted to recover the cost of spam through lawsuits against spammers, although they have been mostly unsuccessful in collecting damages despite winning in court.
Spammers collect e-mail addresses from chatrooms, websites, customer lists, newsgroups, and viruses which harvest users' address books; these lists are often sold to other spammers. Much spam is sent to invalid e-mail addresses. Spam averages 78% of all e-mail sent.

Spam is flooding the Internet with many copies of the same message, in an attempt to force the message on people who would not otherwise choose to receive it. Most spam is commercial advertising, often for dubious products, get-rich-quick schemes, or quasi-legal services. Spam costs the sender very little to send; most of the costs are paid by the recipient or the carriers rather than by the sender.
There are two main types of spam, and they have different effects on Internet users. Cancellable Usenet spam is a single message sent to 20 or more Usenet newsgroups. (Through long experience, Usenet users have found that any message posted to so many newsgroups is often not relevant to most or all of them.) Usenet spam is aimed at "lurkers", people who read newsgroups but rarely or never post and give their address away. Usenet spam robs users of the utility of the newsgroups by overwhelming them with a barrage of advertising or other irrelevant posts. Furthermore, Usenet spam subverts the ability of system administrators and owners to manage the topics they accept on their systems.
Email spam targets individual users with direct mail messages. Email spam lists are often created by scanning Usenet postings, stealing Internet mailing lists, or searching the Web for addresses. Email spam typically costs users money out-of-pocket to receive. Many people - anyone with metered phone service - read or receive their mail while the meter is running, so to speak. Spam costs them additional money. On top of that, it costs money for ISPs and online services to transmit spam, and these costs are passed directly to subscribers.
One particularly nasty variant of email spam is sending spam to mailing lists (public or private email discussion forums.) Because many mailing lists limit activity to their subscribers, spammers will use automated tools to subscribe to as many mailing lists as possible, so that they can grab the lists of addresses, or use the mailing list as a direct target for their attacks.
Firewalls often have network address translation (NAT) functionality, and the hosts protected behind a firewall commonly have addresses in the "private address range" defined in RFC 1918. Firewalls often have such functionality to hide the true addresses of protected hosts. Originally, the NAT function was developed to address the limited number of IPv4 routable addresses that could be used or assigned to companies or individuals, as well as to reduce both the number and therefore the cost of public addresses needed for every computer in an organization. Hiding the addresses of protected devices has become an increasingly important defense against network reconnaissance.
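A toy sketch of the two NAT ideas just described: recognizing RFC 1918 private addresses, and keeping a translation table so a single public address can front many private hosts. The public address and port range are made-up examples:

```python
import ipaddress, itertools

def is_rfc1918(addr: str) -> bool:
    """True if addr falls in one of the private blocks reserved by RFC 1918."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in (
        ipaddress.ip_network("10.0.0.0/8"),
        ipaddress.ip_network("172.16.0.0/12"),
        ipaddress.ip_network("192.168.0.0/16"),
    ))

PUBLIC_IP = "203.0.113.1"        # hypothetical public address of the firewall
nat_table = {}                   # (private ip, private port) -> public port
_ports = itertools.count(20000)  # next free public-side port

def translate_outbound(src_ip: str, src_port: int):
    """Rewrite a private source to the firewall's public address."""
    if not is_rfc1918(src_ip):
        return src_ip, src_port  # already routable; nothing to hide
    key = (src_ip, src_port)
    if key not in nat_table:
        nat_table[key] = next(_ports)  # remember mapping for reply traffic
    return PUBLIC_IP, nat_table[key]

print(translate_outbound("192.168.1.10", 40000))  # ('203.0.113.1', 20000)
```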
A proxy device (running either on dedicated hardware or as software on a general-purpose machine) may act as a firewall by responding to input packets (connection requests, for example) in the manner of an application, whilst blocking other packets. Proxies make tampering with an internal system from the external network more difficult and misuse of one internal system would not necessarily cause a security breach exploitable from outside the firewall (as long as the application proxy remains intact and properly configured). Conversely, intruders may hijack a publicly-reachable system and use it as a proxy for their own purposes; the proxy then masquerades as that system to other internal machines. While use of internal address spaces enhances security, crackers may still employ methods such as IP spoofing to attempt to pass packets to a target network.
There are several classifications of firewalls depending on where the communication is taking place, where the communication is intercepted, and the state that is being traced.

Network layer and packet filters

Network layer firewalls, also called packet filters, operate at a relatively low level of the TCP/IP protocol stack, not allowing packets to pass through the firewall unless they match the established rule set. The firewall administrator may define the rules, or default rules may apply. The term "packet filter" originated in the context of BSD operating systems. Network layer firewalls generally fall into two sub-categories, stateful and stateless. Stateful firewalls maintain context about active sessions and use that "state information" to speed packet processing. Any existing network connection can be described by several properties, including source and destination IP address, UDP or TCP ports, and the current stage of the connection's lifetime (session initiation, handshaking, data transfer, or connection completion). If a packet does not match an existing connection, it is evaluated according to the rule set for new connections. If a packet matches an existing connection based on comparison with the firewall's state table, it is allowed to pass without further processing. Stateless firewalls require less memory, and can be faster for simple filters that take less time to filter than to look up a session. They may also be necessary for filtering stateless network protocols that have no concept of a session. However, they cannot make more complex decisions based on what stage communications between hosts have reached. Modern firewalls can filter traffic based on many packet attributes, such as source IP address, source port, destination IP address or port, and destination service (such as WWW or FTP). They can also filter based on protocols, TTL values, netblock of the originator, and many other attributes. Commonly used packet filters on various versions of Unix are ipf (various), ipfw (FreeBSD/Mac OS X), pf (OpenBSD and other BSDs), and iptables/ipchains (Linux).

Application-layer

Application-layer firewalls work on the application level of the TCP/IP stack (i.e., all browser, telnet, or ftp traffic) and may intercept all packets traveling to or from an application. They block other packets, usually dropping them without acknowledgment to the sender. In principle, application firewalls can prevent all unwanted outside traffic from reaching protected machines. By inspecting all packets for improper content, firewalls can restrict or prevent outright the spread of networked computer worms and trojans. The additional inspection criteria can add extra latency to the forwarding of packets to their destination.

First generation - packet filters

The first paper published on firewall technology appeared in 1988, when engineers from Digital Equipment Corporation (DEC) developed filter systems known as packet filter firewalls. This fairly basic system was the first generation of what would become a highly evolved and technical Internet security feature. At AT&T Bell Labs, Bill Cheswick and Steve Bellovin continued their research in packet filtering and developed a working model for their own company based upon the original first generation architecture. Packet filters act by inspecting the "packets" which represent the basic unit of data transfer between computers on the Internet. If a packet matches the packet filter's set of rules, the packet filter will drop (silently discard) the packet, or reject it (discard it and send "error responses" to the source). This type of packet filtering pays no attention to whether a packet is part of an existing stream of traffic (it stores no information on connection "state"). Instead, it filters each packet based only on information contained in the packet itself (most commonly using a combination of the packet's source and destination address, its protocol, and, for TCP and UDP traffic, the port number). TCP and UDP protocols constitute most communication over the Internet, and because TCP and UDP traffic by convention uses well-known ports for particular types of traffic, a "stateless" packet filter can distinguish between, and thus control, those types of traffic (such as web browsing, remote printing, email transmission, and file transfer), unless the machines on each side of the packet filter are both using the same non-standard ports.

Second generation - Application layer

The key benefit of application layer filtering is that it can "understand" certain applications and protocols (such as File Transfer Protocol, DNS, or web browsing), and it can detect whether an unwanted protocol is being sneaked through on a non-standard port or whether a protocol is being abused in a harmful way.

Third generation - "stateful" filters

From 1989 to 1990, three colleagues from AT&T Bell Laboratories, Dave Presetto, Janardan Sharma, and Kshitij Nigam, developed the third generation of firewalls, calling them circuit-level firewalls. Third generation firewalls additionally consider the placement of each individual packet within the packet series. This technology is generally referred to as stateful packet inspection, as it maintains records of all connections passing through the firewall and is able to determine whether a packet is the start of a new connection, part of an existing connection, or an invalid packet. Though there is still a set of static rules in such a firewall, the state of a connection can itself be one of the criteria which trigger specific rules. This type of firewall can help prevent attacks which exploit existing connections, as well as certain denial-of-service attacks.

Subsequent developments

In 1992, Bob Braden and Annette DeSchon at the University of Southern California (USC) were refining the concept of a firewall. The product known as "Visas" was the first system to have a visual integration interface with colours and icons, which could easily be implemented on and accessed from a computer operating system such as Microsoft's Windows or Apple's MacOS. In 1994, an Israeli company called Check Point Software Technologies built this into readily available software known as FireWall-1. The deep packet inspection functionality of modern firewalls can be shared by intrusion-prevention systems (IPS). Currently, the Middlebox Communication Working Group of the Internet Engineering Task Force (IETF) is working on standardizing protocols for managing firewalls and other middleboxes. Another axis of development is the integration of user identity into firewall rules; many firewalls provide such features by binding user identities to IP or MAC addresses, which is only approximate and can easily be circumvented. The NuFW firewall provides real identity-based firewalling by requesting the user's signature for each connection.
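To make the stateful/stateless distinction concrete, here is a minimal sketch of a stateful packet filter with a default-deny rule set. The rules and tuple format are invented for illustration; real firewalls track far more per-connection state (sequence numbers, timeouts, TCP flags):

```python
# Toy stateful filter: a rule set for new connections plus a state table
# that lets packets on established connections through without re-checking.
RULES = [  # (proto, dst_port, action) -- illustrative rules only
    ("tcp", 80, "allow"),
    ("tcp", 22, "deny"),
]
state_table = set()  # established (src, sport, dst, dport, proto) tuples

def filter_packet(src, sport, dst, dport, proto):
    conn = (src, sport, dst, dport, proto)
    if conn in state_table:                      # known connection: fast path
        return "allow"
    for rule_proto, rule_port, action in RULES:  # new connection: consult rules
        if proto == rule_proto and dport == rule_port:
            if action == "allow":
                state_table.add(conn)            # remember the session
            return action
    return "deny"                                # default-deny fallback

print(filter_packet("192.0.2.7", 51515, "198.51.100.2", 80, "tcp"))  # allow
```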
The term "firewall" originally meant a wall to confine a fire or potential fire within a building, cf. firewall (construction). Later uses refer to similar structures, such as the metal sheet separating the engine compartment of a vehicle or aircraft from the passenger compartment. Firewall technology emerged in the late 1980s when the Internet was a fairly new technology in terms of its global use and connectivity. The predecessors to firewalls for network security were the routers used in the late 1980s to separate networks from one another. The view of the Internet as a relatively small community of compatible users who valued openness for sharing and collaboration was ended by a number of major internet security breaches which occurred in the late 1980s:
- In 1988, an employee at the NASA Ames Research Center in California sent a memo by email to his colleagues that read, “We are currently under attack from an Internet VIRUS! It has hit Berkeley, UC San Diego, Lawrence Livermore, Stanford, and NASA Ames.”
A firewall is a dedicated appliance, or software running on a computer, which inspects network traffic passing through it, and denies or permits passage based on a set of rules. It is a software or hardware that is normally placed between a protected network and an unprotected network and acts like a gate to protect assets to ensure that nothing private goes out and nothing malicious comes in. A firewall's basic task is to regulate some of the flow of traffic between computer networks of different trust levels. Typical examples are the Internet which is a zone with no trust and an internal network which is a zone of higher trust. A zone with an intermediate trust level, situated between the Internet and a trusted internal network, is often referred to as a "perimeter network" or Demilitarized zone (DMZ). A firewall's function within a network is similar to physical firewalls with fire doors in building construction. In the former case, it is used to prevent network intrusion to the private network. In the latter case, it is intended to contain and delay structural fire from spreading to adjacent structures. Without proper configuration, a firewall can often become worthless. Standard security practices dictate a "default-deny" firewall ruleset, in which the only network connections which are allowed are the ones that have been explicitly allowed. Unfortunately, such a configuration requires detailed understanding of the network applications and endpoints required for the organization's day-to-day operation. Many businesses lack such understanding, and therefore implement a "default-allow" ruleset, in which all traffic is allowed unless it has been specifically blocked. This configuration makes inadvertent network connections and system compromise much more likely.
There are several types of firewall techniques:
- Packet filters
- Application gateways
- Circuit-level gateways
- Proxy servers
A firewall is a part of a computer system or network that is designed to block unauthorized access while permitting authorized communications. It is a device or set of devices configured to permit, deny, encrypt, decrypt, or proxy all (in and out) computer traffic between different security domains based upon a set of rules and other criteria. Firewalls can be implemented in either hardware or software, or a combination of both. Firewalls are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. All messages entering or leaving the intranet pass through the firewall, which examines each message and blocks those that do not meet the specified security criteria.
Web hosting is often provided as part of a general Internet access plan; there are many free and paid providers offering these services. A customer needs to evaluate the requirements of the application to choose what kind of hosting to use. Such considerations include database server software, scripting software, and the operating system. Most hosting providers offer Linux-based web hosting, which supports a wide range of software. A typical configuration for a Linux server is the LAMP platform: Linux, Apache, MySQL, and PHP/Perl/Python. The web hosting client may want other services, such as email for their business domain, databases, or multimedia services for streaming media. A customer may also choose Windows as the hosting platform, in which case the customer can still choose from PHP, Perl, and Python, but may also use ASP.NET or Classic ASP. Web hosting packages often include a Web content management system, so the end user doesn't have to worry about the more technical aspects. These systems are great for the average user, but for those who want more control over their website design, this feature may not be adequate; one can always install any content management system on one's own servers and modify it at will. A few good examples include WordPress, Joomla, Drupal, and MediaWiki. One may also search the Internet for active web hosting message boards and forums that provide feedback on what type of web hosting company may suit one's needs.
- Colocation web hosting service: similar to the dedicated web hosting service, but the user owns the colo server; the hosting company provides physical space that the server takes up and takes care of the server. This is the most powerful and expensive type of the web hosting service. In most cases, the colocation provider may provide little to no support directly for their client's machine, providing only the electrical, Internet access, and storage facilities for the server. In most cases for colo, the client would have his own administrator visit the data center on site to do any hardware upgrades or changes.
- Cloud hosting: a new type of hosting platform that offers customers powerful, scalable, and reliable hosting based on clustered, load-balanced servers and utility billing. It removes single points of failure and allows customers to pay only for what they use rather than for what they could use.
- Clustered hosting: having multiple servers hosting the same content for better resource utilization. Clustered Servers are a perfect solution for high-availability dedicated hosting, or creating a scalable web hosting solution. A cluster may separate web serving from database hosting capability.
- Grid hosting: this form of distributed hosting is when a server cluster acts like a grid and is composed of multiple nodes.
- Home server: usually a single machine placed in a private residence that can be used to host one or more web sites, usually over a consumer-grade broadband connection. These can be purpose-built machines or, more commonly, old PCs. Some ISPs actively attempt to block home servers by disallowing incoming requests to TCP port 80 of the user's connection and by refusing to provide static IP addresses. A common way to attain a reliable DNS hostname is to create an account with a dynamic DNS service, which automatically changes the IP address that a URL points to when that address changes (see the sketch below).
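The dynamic DNS sketch referenced above: a client periodically checks its public IP and, when it changes, calls the provider's update API. Every URL, parameter, and token below is a hypothetical placeholder, since each dynamic DNS provider defines its own API:

```python
import time
import urllib.request

TOKEN = "secret-token"                        # hypothetical credential
UPDATE_URL = "https://dyndns.example/update"  # hypothetical endpoint

def current_public_ip() -> str:
    # Many "what is my IP" services exist; this URL is a placeholder.
    with urllib.request.urlopen("https://ip.example/plain") as resp:
        return resp.read().decode().strip()

last_ip = None
while True:
    ip = current_public_ip()
    if ip != last_ip:  # only update the record when the address changes
        urllib.request.urlopen(
            f"{UPDATE_URL}?hostname=myhome.example&ip={ip}&token={TOKEN}")
        last_ip = ip
    time.sleep(300)    # poll every five minutes
```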
Some specific types of hosting provided by web host service providers:
- File hosting service: hosts files, not web pages
- Image hosting service
- Video hosting service
- Blog hosting service
- One-click hosting
- Pastebin: hosts text snippets
- Shopping cart software
Internet hosting services can run Web servers. Many large companies that are not Internet service providers also need a computer permanently connected to the Web so they can send email, files, etc. to other sites. They may also use the computer as a website host to provide details of their goods and services to anyone interested, and to let visitors place online orders.
Hosting uptime refers to the percentage of time the host is accessible via the internet. Many providers state that they aim for at least 99.9% uptime (roughly equivalent to 45 minutes of downtime a month, or less), but there may be server restarts and planned (or unplanned) maintenance in any hosting environment, which may or may not be considered part of the official uptime promise. Many providers tie uptime and accessibility into their own service level agreement (SLA). SLAs sometimes include refunds or reduced costs if performance goals are not met.
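The "99.9% uptime is roughly 45 minutes of downtime a month" figure is easy to verify; a few lines make the arithmetic explicit (assuming a 30-day month):

```python
def downtime_minutes_per_month(uptime_pct: float, days: int = 30) -> float:
    """Minutes of allowed downtime per month at a given uptime percentage."""
    return days * 24 * 60 * (1 - uptime_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime -> {downtime_minutes_per_month(pct):.1f} min/month")
# 99.9% -> 43.2 min/month, i.e. the "roughly 45 minutes" quoted above
```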
The scope of hosting services varies widely. The most basic is web page and small-scale file hosting, where files can be uploaded via File Transfer Protocol (FTP) or a Web interface. The files are usually delivered to the Web "as is" or with little processing. Many Internet service providers (ISPs) offer this service free to their subscribers. People can also obtain Web page hosting from other, alternative service providers. Personal web site hosting is typically free, advertisement-sponsored, or cheap. Business web site hosting is often more expensive. Single page hosting is generally sufficient only for personal web pages. A complex site calls for a more comprehensive package that provides database support and application development platforms (e.g. PHP, Java, Ruby on Rails, ColdFusion, and ASP.NET). These facilities allow customers to write or install scripts for applications like forums and content management. For e-commerce, SSL is also highly recommended. The host may also provide an interface or control panel for managing the Web server and installing scripts, as well as other services like e-mail. Some hosts specialize in certain software or services (e.g. e-commerce). They are commonly used by larger companies to outsource network infrastructure to a hosting company.
A web hosting service is a type of Internet hosting service that allows individuals and organizations to make their own website accessible via the World Wide Web. Web hosts are companies that provide space on a server they own or lease for use by their clients as well as providing Internet connectivity, typically in a data center. Web hosts can also provide data center space and connectivity to the Internet for servers they do not own to be located in their data center, called colocation.
Overload causes

At any time web servers can be overloaded because of too much legitimate traffic (for example, thousands of clients connecting in a short interval), distributed denial-of-service attacks, worms or viruses generating abnormal traffic, or slowdowns in the surrounding network.

Overload symptoms

The symptoms of an overloaded web server are noticeably delayed responses, refused or reset TCP connections, and error responses (such as 500, 502, 503, or 504) returned instead of the requested content.

Anti-overload techniques

To partially overcome load limits and to prevent overload, most popular web sites use common techniques such as traffic management and firewalling, caching of frequently requested content, and balancing load across multiple servers.
Load limits

A web server (program) has defined load limits: it can handle only a limited number of concurrent client connections (usually between 2 and 80,000, by default between 500 and 1,000) per IP address (and TCP port), and it can serve only a certain maximum number of requests per second, depending on its settings, the type of request, whether the content is static or dynamic and whether it is cached, and the hardware and software limits of the host. When a web server is near to or over its limits, it becomes overloaded and thus unresponsive.

Kernel-mode and user-mode web servers

A web server can be implemented either in the OS kernel or in user space (like other regular applications). An in-kernel web server (like TUX on Linux or Microsoft IIS on Windows) will usually work faster because, as part of the system, it can directly use all the hardware resources it needs, such as non-paged memory, CPU time slices, and network adapter buffers. Web servers that run in user mode have to ask the system for permission to use more memory or more CPU resources. Not only do these requests to the kernel take time, they are not always granted, because the system reserves resources for its own use and has the responsibility of sharing hardware resources with all the other running applications. User-mode applications also cannot access the kernel's internal buffers, which forces redundant buffer copies and is a further handicap for user-mode web servers. As a consequence, the only way for a user-mode web server to match kernel-mode performance is to raise the quality of its code to much higher standards than the code of a web server that runs in the kernel. This is more difficult under Windows than under Linux, where the user-mode overhead is about six times smaller.
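To make the connection-limit idea concrete, here is a minimal user-mode sketch: a semaphore admits at most a fixed number of concurrent clients, mirroring the default limits mentioned above. It is a toy (one static response, no parsing, no timeouts), not a real web server:

```python
import socket
import threading

MAX_CONN = 500                       # a typical default from the text above
slots = threading.BoundedSemaphore(MAX_CONN)

def handle(conn):
    try:
        conn.recv(1024)              # read (and ignore) the request
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    finally:
        conn.close()
        slots.release()              # free the slot for a waiting client

srv = socket.socket()
srv.bind(("127.0.0.1", 8080))
srv.listen()
while True:
    slots.acquire()                  # block when MAX_CONN clients are active
    client, _ = srv.accept()
    threading.Thread(target=handle, args=(client,), daemon=True).start()
```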