
New Chrome security measure aims to curtail an entire class of Web attack


For more than a decade, the Internet has remained vulnerable to a class of attacks that uses browsers as a beachhead for accessing routers and other sensitive devices on a targeted network. Now, Google is finally doing something about it.

Starting in Chrome version 98, the browser will begin sending preflight checks when public websites try to access endpoints inside the private network of the person visiting the site. For the time being, failed checks won't prevent the connections from happening; they'll only be logged. Somewhere around Chrome 101, assuming the results of this trial run don't indicate major parts of the Internet will be broken, it will become mandatory for public sites to obtain explicit permission before they can access endpoints behind the browser.

The planned deprecation of this access comes as Google rolls out a new specification known as Private Network Access (PNA), which permits public websites to reach internal network resources only after the sites have explicitly requested access and the browser has granted it. PNA checks are carried out using CORS, the Cross-Origin Resource Sharing protocol. Under the scheme, the browser sends a preflight request carrying the new header Access-Control-Request-Private-Network: true. For the request to be granted, the device on the private network must respond with the corresponding header Access-Control-Allow-Private-Network: true.
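As a sketch of what that exchange looks like from the device's side, here is a minimal handler that decides which response headers a private-network device could emit for a preflight. The function name and the permissive origin policy are our own illustration, not Chrome or spec code; only the two PNA header names come from the specification described above.

```javascript
// Sketch: build the response headers for a CORS preflight arriving at a
// device on the private network. (Illustrative; a real device would also
// validate the Origin header rather than allowing "*".)
function buildPnaPreflightResponse(requestHeaders) {
  // The browser adds this header to the preflight when a public page
  // targets a private-network endpoint.
  const asksPrivate =
    requestHeaders['access-control-request-private-network'] === 'true';

  if (!asksPrivate) {
    // Ordinary CORS preflight; no PNA grant involved.
    return { 'access-control-allow-origin': '*' };
  }

  // A device that wants to opt in answers with the corresponding
  // allow header; omitting it makes the preflight fail.
  return {
    'access-control-allow-origin': '*',
    'access-control-allow-private-network': 'true',
  };
}
```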

Network intrusion via the browser

Up to now, websites have by default had the ability to use Chrome and other browsers as a proxy for accessing resources inside the local network of the person visiting the site. While routers, printers, and other network assets are often locked down, browsers—because of the need for them to interact with so many services—are by default permitted to connect to virtually any resource inside the local network perimeter. This has given rise to a class of attack known as CSRF, short for cross-site request forgery.
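To make the "browser as proxy" point concrete, here is the shape of such a forged request. The router address, path, and parameter name are hypothetical stand-ins; the point is that any public page can construct a URL aimed at a private address, and the browser will send the request from inside the victim's network, attaching their ambient credentials.

```javascript
// Illustration of a CSRF payload's target: a hypothetical router admin
// endpoint that accepts configuration changes via GET parameters.
// (Address, path, and parameter name are made up for this sketch.)
function forgedRouterRequestUrl(attackerDns) {
  const target = new URL('http://192.168.1.1/apply.cgi');
  target.searchParams.set('dns1', attackerDns);
  // A malicious page would pass this to fetch() or an <img> tag; the
  // browser sends it from inside the victim's network, with their cookies.
  return target.toString();
}
```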

Such attacks have been theorized for more than a decade and have also been carried out in the wild, often with significant consequences. In one 2014 incident, hackers used CSRFs to change the DNS server settings for more than 300,000 wireless routers.

The change caused the compromised routers to use malicious DNS servers to resolve the IP addresses end users were trying to visit. Instead of returning the IP address of the authentic site, for instance, the malicious server might return the address of a booby-trapped imposter site that the end user has no reason to distrust. The image below, from researchers at Team Cymru, shows the three steps involved in those attacks.

Three phases of an attack that changes a router’s DNS settings by exploiting a cross-site request vulnerability in the device’s Web interface.

Team Cymru

In 2016, the people behind the same attack returned to push malware known as DNSChanger. As I explained at the time, the campaign worked this way against home and office routers made by Netgear, D-Link, Comtrend, and Pirelli:

DNSChanger uses a set of real-time communications protocols known as WebRTC to send so-called STUN server requests used in VoIP communications. The exploit is ultimately able to funnel code through the Chrome browser for Windows and Android to reach the network router. The attack then compares the accessed router against 166 fingerprints of known vulnerable router firmware images.

Assuming the PNA specification goes fully into effect, Chrome will no longer permit such connections unless a device inside the private network explicitly allows them.
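The decision hinges on which "address space" each end of a connection sits in. Below is a rough sketch of that classification for IPv4 addresses, simplified from what the spec describes (real browsers also handle IPv6, loopback names such as localhost, and additional reserved ranges):

```javascript
// Rough IPv4 bucketing into the address spaces PNA distinguishes.
// Simplified illustration, not Chrome's actual implementation.
function addressSpace(ip) {
  const [a, b] = ip.split('.').map(Number);
  if (a === 127) return 'local';                          // loopback
  if (a === 10) return 'private';                         // 10.0.0.0/8
  if (a === 172 && b >= 16 && b <= 31) return 'private';  // 172.16.0.0/12
  if (a === 192 && b === 168) return 'private';           // 192.168.0.0/16
  if (a === 169 && b === 254) return 'private';           // link-local
  return 'public';
}

// A "private network request" is one that crosses from a more public
// space into a less public one, e.g. a public site reaching a private IP.
function isPrivateNetworkRequest(initiatorSpace, targetSpace) {
  const rank = { public: 0, private: 1, local: 2 };
  return rank[targetSpace] > rank[initiatorSpace];
}
```

Only requests that cross such a boundary trigger the new preflight; public-to-public traffic is unaffected.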


The road ahead

Starting in version 98, if Chrome detects a private network request, a “preflight request” will be sent ahead of time. If the preflight request fails, the final request will still be sent, but a warning will be surfaced in the DevTools issues panel.

“Any failed preflight request will result in a failed fetch,” Google engineer Titouan Rigoudy and Google developer Eiji Kitamura wrote in a recent blog post. “This can allow you to test whether your website would work after the second phase of our rollout plan. Errors can be diagnosed in the same way as warnings using the DevTools panels mentioned above.”

If and when Google is confident there won’t be mass disruptions, the preflight check will become mandatory: requests that fail it will be blocked.
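The two rollout phases can be summarized as a small decision function (the naming here is ours; the phases themselves are as described in the post):

```javascript
// Outcome of a private network request under the two rollout phases.
// Phase one (around Chrome 98): a failed preflight only logs a warning.
// Phase two (targeted for around Chrome 101): a failed preflight blocks
// the fetch. Function and value names are illustrative.
function pnaOutcome(preflightSucceeded, enforcementEnabled) {
  if (preflightSucceeded) return 'allowed';
  return enforcementEnabled ? 'blocked' : 'allowed-with-warning';
}
```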
