The Economist explains

How does a denial-of-service attack work?

An onslaught of activity attempts to bring down servers

By G.F. | SEATTLE

INTERNET-FREEDOM advocates hope Lu Wei, China’s internet tsar, will indicate today whether the authorities have any knowledge of a raid on GitHub, an America-based website for programmers that also hosts content objectionable to China. Since Thursday hackers have been hijacking web traffic intended for Baidu, the Google of China, and redirecting it to bombard two pages run by GitHub. (Baidu denies involvement.) The targeted pages link to a copy of the Chinese-language edition of the New York Times and to a copy of Greatfire.org, a service that seeks to circumvent China’s “Great Firewall”. The redirection of Baidu's massive traffic flow is seemingly intended to overload the GitHub pages, making them unavailable to other readers. It is a form of what is known as a "denial-of-service" (DoS) attack. Such DoS onslaughts date back to the 1990s. They show no sign of abating, even as techniques to thwart them have improved. The attacks are embarrassing for a government and potentially financially crippling for a business. They can be the work of criminals, who hold sites to ransom or exploit weaknesses by overwhelming their servers, or of hackers operating as agents of states, as Russia was accused of doing against Estonia in 2007. But how are such attacks carried out?

A website is technically a "service", a software-based system that responds in a particular way to incoming requests from client software, in this case a web browser. But a web browser's requests can be easily faked. A web server can only respond efficiently to a certain number of requests for pages, graphics and other website elements at once. Exceed that number, and it bogs down. Go too far, and the system may become entirely unresponsive. Huge floods of traffic, whether legitimate or not, can thus cripple a server. In recent years beefier hardware and better tools to distribute incoming requests among multiple servers have made things more difficult for attackers. DoS attacks once involved a single computer flooding a web server. When that became ineffective, distributed DoS (DDoS) onslaughts conscripted thousands of virus-infected computers, known as zombies, to bombard the target system with bogus requests from many locations at once. This used to be impossible to block without severing the server's internet link altogether. But now specialised hardware can distinguish between real requests and those intended to harm a site, and block them before they form a tsunami of traffic.
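
The principle behind such filtering can be sketched in a few lines of code. The Python below is purely illustrative: the rates, the client address and the token-bucket approach are assumptions for the example, not details of any real appliance, which uses far more sophisticated heuristics. The idea is simply that a well-behaved client's request rate passes, while a flood from a single source is dropped.

```python
import time
from collections import defaultdict

# Illustrative only: a per-client token-bucket filter of the kind described
# above, which lets ordinary request rates through and drops floods.
# The rate and burst figures are arbitrary assumptions, not real-world values.
RATE = 5.0    # requests allowed per second per client, on average
BURST = 10.0  # maximum short-term burst per client

_buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow_request(client_ip: str) -> bool:
    """Return True if the request should be served, False if dropped."""
    bucket = _buckets[client_ip]
    now = time.monotonic()
    # Refill tokens in proportion to the time elapsed since the last request.
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1.0:
        bucket["tokens"] -= 1.0
        return True
    return False

if __name__ == "__main__":
    # A single (hypothetical) client firing 50 requests back-to-back:
    # roughly the first BURST pass, the rest are dropped until the bucket refills.
    served = sum(allow_request("203.0.113.7") for _ in range(50))
    print(f"served {served} of 50 back-to-back requests")
```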

Attackers, in turn, have responded with ever more sophisticated software. Past attacks were akin to making a telephone call and never replying when the other end answered, thus tying up the line. Mike Rothman, a researcher at Securosis, a security firm, explains in a white paper that hardware designed to repel such attacks can be bypassed using encrypted connections (HTTPS sessions), which are typically handled directly by the server. Attackers also turn innocent websites and other internet services (such as domain-name servers) into unwitting assailants. This involves forging the sending address on queries to these other servers, which obligingly reply to the systems under attack, adding to the load. Attacks can thus be scaled up to well over 100 gigabits per second (Gbps). But Mr Rothman notes that some attackers prefer precision attacks that exploit weaknesses in a specific function, rather than assaulting the entire server. For instance, sending huge numbers of legitimate-seeming search requests to a website, each of which uses up substantial computational power, may be more effective and harder to pinpoint than simply flooding it with bogus page requests.
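
A rough, back-of-envelope calculation shows why such reflection is so potent. The figures below, for query size, reply size and the number of abused resolvers, are assumptions chosen only to make the arithmetic concrete, not measurements of any actual attack.

```python
# Back-of-envelope arithmetic for the reflection technique described above.
# All figures are illustrative assumptions, not measurements of a real attack.

query_bytes = 60          # size of a small, spoofed query to a domain-name server
reply_bytes = 3_000       # size of the much larger reply sent to the victim
amplification = reply_bytes / query_bytes   # ~50x more traffic hits the target

reflectors = 100_000              # open resolvers abused as unwitting assailants
queries_per_reflector_s = 50      # spoofed queries sent to each per second

victim_gbps = reflectors * queries_per_reflector_s * reply_bytes * 8 / 1e9
attacker_gbps = reflectors * queries_per_reflector_s * query_bytes * 8 / 1e9

print(f"amplification factor: {amplification:.0f}x")
print(f"attacker sends ~{attacker_gbps:.1f} Gbps; victim receives ~{victim_gbps:.0f} Gbps")
```

On these assumed figures the attacker emits a modest 2.4 Gbps of forged queries while the target absorbs roughly 120 Gbps of replies, which is how floods of "well over 100 Gbps" can be produced without the attacker owning anything like that much bandwidth.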

What can be done to protect sites against attack? There is no single answer. Content distribution networks (CDNs) such as Akamai deliver website content on behalf of customers from hundreds or thousands of locations around the world. An attack against such a network is vastly more difficult, both because of its size and because its servers are dispersed. Some security firms now offer a "scrubbing" service that allows a site under attack to redirect traffic through the security firm's servers, which remove (scrub) the bad requests and send legitimate ones through. And developers of big websites should now be considering Web Application Firewalls (WAFs), which can be tailored to rebuff unwanted requests intended to overload the site's search, shopping cart or document-uploading features. But history suggests that these new defensive mechanisms will spur attackers to develop more sophisticated tools of their own. Participation in this arms race is, alas, now an unavoidable aspect of doing business on the internet.
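
To give a flavour of one WAF-style rule, the sketch below (again in Python, and again an assumption-laden illustration rather than any vendor's product) applies a stricter per-client limit to a computationally expensive "/search" endpoint than to the rest of a site. The path, the limit and the stand-in application are all hypothetical.

```python
import time
from collections import defaultdict
from wsgiref.simple_server import make_server

# Illustrative sketch of a single WAF-style rule: a tighter per-client limit on
# an expensive endpoint (assumed here to be "/search") than on the rest of the
# site. Paths, limits and the backing application are assumptions for the example.
SEARCH_LIMIT_PER_MINUTE = 30
_windows = defaultdict(lambda: [0.0, 0])   # client_ip -> [window_start, count]

def app(environ, start_response):
    """Stand-in for the real site: every request just returns a short page."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello\n"]

def waf(application):
    def guarded(environ, start_response):
        if environ.get("PATH_INFO", "").startswith("/search"):
            ip = environ.get("REMOTE_ADDR", "unknown")
            window = _windows[ip]
            now = time.monotonic()
            if now - window[0] > 60:          # start a fresh one-minute window
                window[0], window[1] = now, 0
            window[1] += 1
            if window[1] > SEARCH_LIMIT_PER_MINUTE:
                start_response("429 Too Many Requests",
                               [("Content-Type", "text/plain")])
                return [b"search rate limit exceeded\n"]
        return application(environ, start_response)
    return guarded

if __name__ == "__main__":
    # Serve the guarded application locally; ordinary pages are untouched,
    # while a burst of search requests from one address is turned away.
    make_server("127.0.0.1", 8000, waf(app)).serve_forever()
```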

This post has been updated to reflect news.
