The pitfalls of allowing file uploads on your website

These days, many websites allow users to upload files, but few are aware of the pitfalls of letting users (potential attackers) upload files, even seemingly valid ones.

What’s a valid file? Usually, a restriction would be on two parameters:

  • The uploaded file's extension
  • The uploaded file's Content-Type

For example, the web application could check that the extension is “jpg” and the Content-Type “image/jpeg” to make sure it’s impossible to upload malicious files. Right?

The problem is that plugins like Flash don't care about the extension or the Content-Type. If a file is embedded using an <object> tag, it will be executed as a Flash file as long as its content looks like a valid Flash file.

But wait a minute! Shouldn't the Flash file be executed within the domain that embeds it using the <object> tag? Yes and no. If a Flash file (a bogus image file) is uploaded to one site and then embedded on another, the Flash file can execute JavaScript within the domain of the embedding page. However, if the Flash file sends requests to the site it was uploaded to, it will be allowed to read the responses within that site's domain.

This basically means that if a website allows file uploads without validating the content of the file, an attacker can bypass any CSRF protection on the website.
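Validating the uploaded bytes closes this hole. A minimal sketch in Python (assuming JPEG uploads; the magic bytes below are the real file signatures, the function names are our own):

```python
# Validate uploaded bytes by file signature, not by extension or
# Content-Type. JPEG files start with FF D8 FF; Flash (SWF) files
# start with "FWS", "CWS" or "ZWS".

JPEG_MAGIC = b"\xff\xd8\xff"
SWF_MAGICS = (b"FWS", b"CWS", b"ZWS")

def accept_jpeg_upload(data: bytes) -> bool:
    """Reject anything that is not a real JPEG, regardless of what the
    client claimed in the filename or Content-Type header."""
    if data[:3] in SWF_MAGICS:  # an SWF masquerading as an image
        return False
    return data.startswith(JPEG_MAGIC)
```

A fake "image.jpg" that is really a Flash file fails this check even though its extension and Content-Type look fine.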

The attack

Based on these facts we can create an attack scenario like this:

  1. An attacker creates a malicious Flash (SWF) file
  2. The attacker changes the file extension to JPG
  3. The attacker uploads the file to the target website
  4. The attacker embeds the uploaded file using an <object> tag with type "application/x-shockwave-flash"
  5. The victim visits the attacker's page, which loads the file embedded with the <object> tag
  6. The attacker can now send and receive arbitrary requests to the target site using the victim's session
  7. The attacker sends a request to the target site and extracts the CSRF token from the response

A payload could look like this:

<object style="height:1px;width:1px;" data="" type="application/x-shockwave-flash" allowscriptaccess="always" flashvars="c=read&u="></object>

The fix

The good news is that there's a fairly easy way to prevent Flash from doing this. Flash won't execute the file if the server sends a Content-Disposition header like so:

Content-Disposition: attachment; filename="image.jpg"

So if you allow file uploads or print arbitrary user data in your service, you should always validate the contents as well as send a Content-Disposition header where applicable.
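As a sketch, the response headers for a downloadable upload could be built like this (the helper name and extra headers are our own; the Content-Disposition line is the essential part):

```python
def headers_for_upload(filename: str) -> list:
    """Build response headers for serving a user-uploaded file."""
    # Strip characters that could be abused for header injection or
    # for breaking out of the quoted filename.
    safe_name = filename.replace('"', "").replace("\r", "").replace("\n", "")
    return [
        ("Content-Type", "application/octet-stream"),
        # "attachment" is what stops Flash from executing the file in
        # the hosting domain's context.
        ("Content-Disposition", 'attachment; filename="%s"' % safe_name),
        # Also stop browsers from content-sniffing the body.
        ("X-Content-Type-Options", "nosniff"),
    ]
```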

Another way to remediate issues like this is to host the uploaded files on a separate, sandboxed domain.

Other uses

But the fun doesn't stop at file uploads! Since the only requirement of this attack is that an attacker can control the data at some location on the target domain (regardless of Content-Type), there's more than one way to perform it.

One way would be to abuse a JSONP API. Usually, an attacker can control the output of a JSONP endpoint by changing the callback parameter. However, if the attacker supplies an entire Flash file as the callback, it can be used just like an uploaded file in this attack. A payload could look like this:

<object style="height:1px;width:1px;" data="" type="application/x-shockwave-flash" allowscriptaccess="always" flashvars="c=alert&u="></object>
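Validating the callback name closes this variant. A sketch (the 64-character cap and the allowed character set are our own choices):

```python
import re

# Allow only identifier-like callback names, optionally dotted
# (e.g. "jQuery123_cb" or "app.handlers.cb"), with a length cap.
# A Flash file supplied as a callback can never match this pattern.
CALLBACK_RE = re.compile(r"^[A-Za-z_$][A-Za-z0-9_$.]{0,63}$")

def safe_jsonp_callback(name: str) -> bool:
    return CALLBACK_RE.fullmatch(name) is not None
```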

tl;dr: Send Content-Disposition headers for uploaded files and validate your JSONP callback names. Or host the uploaded files on a separate domain.

And as always, if you want to know whether your website has issues like these, try a Detectify scan!

That’s it for now.

Written by: Mathias, Frans


How we got read access on Google’s production servers

To stay on top of the latest security alerts, we often spend time on bug bounties and CTFs. When we were discussing the challenge for the weekend, Mathias got an interesting idea: what target can we use against itself?

Of course. The Google search engine!

What better way to scan Google for bugs than by using the search engine itself? And what kind of software tends to contain the most vulnerabilities?

  • Old and deprecated software
  • Unknown and hardly accessible software
  • Proprietary software that only a few people have access to
  • Alpha/Beta releases and otherwise new technologies (software in the early stages of its lifetime)

For you bounty hunters, here’s a tip: Google Dork

By combining these factors, we started Google dorking for acquisitions and products tied to antique systems without any noticeable amount of users.

One system caught our eye: the Google Toolbar button gallery. We looked at each other and jokingly said "this looks vuln!", not knowing how right we were.

Not two minutes later we noticed that the gallery provides users with the ability to customize their toolbar with new buttons. If you're a developer, you're also able to create your own buttons by uploading XML files containing various metadata (styling and such).

Fredrik read through the API specifications and crafted his own button containing fishy XML entities. The plan was to conduct an XXE attack, as he had noticed that the title and description fields were printed out when searching for buttons.

The root cause of XXE vulnerabilities is naive XML parsers that blindly interpret the DTD of user-supplied XML documents. By doing so, you risk having your parser do a bunch of nasty things. Some issues include local file access, SSRF and remote file inclusion, denial of service, and possibly remote code execution. If you want to know how to patch these issues, check out the OWASP page on securing XML parsers in various languages and platforms.
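As a minimal illustration of one such hardening step, an application could refuse uploaded XML that declares a DTD at all, since XXE payloads depend on inline DOCTYPE/ENTITY declarations. This is a sketch; a real parser should additionally have entity resolution disabled, as the OWASP guidance describes:

```python
import xml.etree.ElementTree as ET

def parse_untrusted_xml(text: str) -> ET.Element:
    """Parse user-supplied XML, rejecting any document with a DTD."""
    # Crude but effective pre-check: no DOCTYPE, no entity declarations.
    upper = text.upper()
    if "<!DOCTYPE" in upper or "<!ENTITY" in upper:
        raise ValueError("DTDs are not allowed in uploaded XML")
    return ET.fromstring(text)
```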

Nevertheless, the file got uploaded… and behold! First try: /etc/passwd

Second try (for verification purposes): /etc/hosts

Boom goes the dynamite.

What you see here is the /etc/passwd and the /etc/hosts of one of Google's production servers. Our payloads served as a proof of concept to demonstrate the impact. We could just as well have tried to access any other file on their server, or moved on to SSRF exploitation to access internal systems. To say the least, that's pretty bad.

We contacted Google straight away while popping open some celebration beers. After 20 minutes we got a reply from Thai on the Google Security Team. They were impressed. We exchanged a few emails on the details back and forth over the following days. In our correspondence we asked how much the vulnerability was worth. This is what we received as a reply: XXE Meme

The bottles (or whatever it is that falls out) turned out to be worth $10,000, enough to cover a road trip through Europe.

tl;dr: We uploaded a malicious XML to one of Google’s servers. Turned out to be a major XXE issue. Google financed an awesome road trip for the team.

Thanks for reading.

Written by: Fredrik
Co-Author: Mathias

If Google can get hacked, are you sure your service is secure? Try Detectify here and see for yourself.


Chrome XSS Protection Bypass (using Rails)

What is the Chrome XSS protection?
The Chrome XSS Protection (also known as XSS auditor) checks whether a script that’s about to run on a web page is also present in the request that fetched that web page. If the script is present in the request, that’s a strong indication that the web server might have been tricked into reflecting the script. So in short, it blocks reflected XSS attacks.

A couple of months ago I discovered that the Chrome XSS Protection could be bypassed in Rails. Later, when I saw the issue brought up on Twitter by homakov, I figured I'd write something about it as well. Here's how the testing went down:

First try
First off, we created a dummy script with a straightforward XSS scenario. Here's the code:

<h1>Variable: <%= raw params[:variable] %></h1>

Let’s test it with a basic cross-site scripting payload:

GET /?variable=<script>alert(1)</script> HTTP/1.1
<h1>Variable: <script>alert(1)</script></h1>

Oh, no! The XSS auditor blocked the attempt. Let’s try something a bit different!

The bypass

GET /?variable[<script>]=*alert(1)</script> HTTP/1.1
<h1>Variable: {"<script>"=>"*alert(1)</script>"}</h1>

It works! But why? Let's have a closer look at the source code and strip away everything except the <script> tag:

<script>"=>" * alert(1)</script>

Okay, that's a bit clearer. The asterisk works as multiplication of whatever's on either side, so the JavaScript will try to multiply the string "=>" by whatever the function alert() returns. Since alert() doesn't return anything, the right-hand value will be undefined and the final result of the calculation will be NaN.

The XSS auditor probably misses this because Rails doesn't reflect exactly what the browser sent, making it hard to filter automatically. However, Internet Explorer's XSS filter as well as NoScript catch it.
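A simulated illustration of why the reflected bytes differ from the request bytes (this is Python mimicking Rack's bracket-parameter parsing, not actual Rails code):

```python
def parse_bracket_param(query: str) -> dict:
    """Toy version of how "name[key]=value" becomes a hash."""
    name, _, value = query.partition("=")
    key = name[name.index("[") + 1 : name.index("]")]
    return {key: value}

query = "variable[<script>]=*alert(1)</script>"
parsed = parse_bracket_param(query)

# Rails' raw output prints the Ruby hash, not the original query string:
reflected = "{" + ", ".join('"%s"=>"%s"' % kv for kv in parsed.items()) + "}"
```

Because `reflected` is not a byte-for-byte match against anything in `query`, the auditor's request/response comparison never fires, even though a live `<script>` tag ends up in the page.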

TL;DR: Chrome's XSS auditor can be bypassed in Rails like so: ?variable[<script>]=*alert(1)</script>.

Want to try the world's cheapest web security scanner? Sign up here.
Written by: Mathias Karlsson


Detectify Responsible Disclosure Program

As of today, researchers can report security issues in Detectify services to earn a spot on our Hall of Fame as well as some cool prizes. The Detectify team has participated in most Responsible Disclosure programs out there, and we felt the time had come to have one of our own.

But our service is made for finding web vulnerabilities, so why do we need a disclosure program ourselves? Well, even though our services are based around finding security bugs in web applications, we are not so naive as to think that our own applications are 100% flawless. We take security issues seriously and will respond swiftly to fix verifiable security issues. If you are the first to report a verifiable security issue, we'll thank you with some cool stuff and a place on our Hall of Fame page.

So how does the reporting process work? It's a five-step process:

  • A researcher sends a mail using the correct template
  • The researcher gets an automatic response confirming that we have received the report
  • A support case is automatically created
  • The person assigned to the support case responds to the researcher, verifying the issue
  • The issue is patched and the researcher is showered in eternal glory

What bugs are eligible? Any typical web security bugs such as:

  • Cross-site Scripting
  • Open redirect
  • Cross-site request forgery
  • File inclusion
  • Authentication bypass
  • Server-side code execution

What bugs are NOT eligible? Any typical low-impact or high-complexity issues, such as:

  • Missing Cookie flags on non-session cookies or 3rd party cookies
  • Logout CSRF
  • Social engineering
  • Denial of service

So what are you waiting for?

Sign up for Detectify here.



BREACH

Lately, there has been a lot of buzz around some recent techniques for defeating HTTPS. Some mentionable names are BEAST and CRIME. These are all reverse acronyms, and BREACH is no different, being an abbreviation for Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext.

At the ekoparty security conference in 2012, the security researchers Juliano Rizzo and Thai Duong revealed an exploit called CRIME. This exploit allowed attackers to "break" SSL and intercept HTTPS traffic. Even though it made quite a splash at the time, CRIME is no longer much of a threat: Firefox, Chrome and IE are among the browsers claiming not to be vulnerable to the exploit.

However, at this year's Black Hat conference, a new exploit was revealed. News of BREACH quickly spread to newspapers, magazines and blogs under the impressive slogan "SSL, gone in 30 seconds". BREACH was developed by Angelo Prado, Neal Harris and Yoel Gluck. It has been regarded as CRIME's successor, which is not so strange when you look at their similarities.

Conceptually, they have a lot in common. Both attacks exploit the compression scheme. Moreover, they both focus on size differences in the ciphertext to get valuable hints. These are fundamental similarities, but there are also fundamental differences.

CRIME uses compression at the TLS/SSL level as the vulnerable component. SSL/TLS-level compression was not very common before the exploit was revealed, so it may have seemed more critical than it actually was. BREACH, however, targets HTTP compression, which is very common. When talking about HTTP compression, we usually mean the DEFLATE algorithm, upon which many compression mechanisms are based. Some examples of these are ZIP and gzip. The latter is widely used for HTTP compression and is also vulnerable to BREACH. So, unlike with CRIME, the number of vulnerable users and servers is huge.

One very important detail about BREACH is that the attacker needs to be able to passively monitor the traffic being attacked, but this goes without saying, really. BREACH uses a technique known as an oracle attack. The DEFLATE algorithm replaces strings that appear several times with pointers in order to minimize the size of the payload. BREACH picks certain strings and includes them in requests sent to the web application, guessing a character of the secret. The ciphertext length of the response is then compared with the original: an incorrect guess produces a longer encrypted payload, while a correct guess does not. This way, the attacker iterates through the "secret", character by character, until the whole value has been guessed. Then the secret has been extracted and the attack is finished.

The term “secret” can be somewhat confusing, but it is basically just an encrypted value that you’d want to hide from potential attackers, such as a password or a national identification number.
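The length oracle is easy to demonstrate with DEFLATE directly. This is a toy model: the page layout and secret below are made up, but the size difference is the real signal BREACH measures:

```python
import zlib

SECRET = "csrf_token=s3cr3tvalue99"  # hypothetical secret in the page

def response_length(guess: str) -> int:
    """Compressed size of a page that reflects the attacker's guess."""
    page = "<html>" + SECRET + " ... echo:" + guess + "</html>"
    return len(zlib.compress(page.encode()))

# A correct guess repeats the secret, so DEFLATE replaces it with a
# short back-reference and the compressed output shrinks; a wrong
# guess of the same length stays as literals and the output is longer.
right = response_length("csrf_token=s3cr3tvalue99")
wrong = response_length("qwertyuiopasdfghjklzxcvb")
```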

The researchers behind BREACH have put together a list of mitigations:

  1. Disabling HTTP compression
  2. Separating secrets from user input
  3. Randomizing secrets per request
  4. Masking secrets (effectively randomizing by XORing with a random secret per request)
  5. Protecting vulnerable pages with CSRF protection
  6. Length hiding (by adding random number of bytes to the responses)
  7. Rate-limiting the requests
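Mitigation 4 (masking) can be sketched in a few lines: XOR the token with a fresh random pad on every response, so the bytes on the wire never repeat even though the underlying secret is constant (the names and framing here are our own):

```python
import secrets

def mask(token: bytes) -> bytes:
    """Return pad || (pad XOR token); a new random pad on every call."""
    pad = secrets.token_bytes(len(token))
    return pad + bytes(p ^ t for p, t in zip(pad, token))

def unmask(masked: bytes) -> bytes:
    """Recover the original token from a masked value."""
    half = len(masked) // 2
    pad, xored = masked[:half], masked[half:]
    return bytes(p ^ x for p, x in zip(pad, xored))
```

Because the masked value changes on every response, a compression oracle can no longer find a repeating byte sequence to measure.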

Written by: Håkon Vågsether.



Another iOS7 Lock Screen bypass – Control Center turned off

In our previous post we covered a bug released yesterday where anyone can break into a phone running iOS 7.

We also wrote about some additional ways to trigger the bug, but all variants could be prevented by turning off the Control Center on the lock screen. So that's what we, and everyone else, suggested.

However, we have discovered that this does not, in fact, prevent a similar bug from bypassing the lock screen. The new method is based on the fact that Voice Control/Siri can make phone calls to known contacts, and by using the shutdown screen while calling, the double-tap trick can still be performed.


Currently we have no suggested patch/fix for this issue.

Written by Mathias/Frans


iOS 7 lock screen bypass write-up

Yesterday a researcher named Jose Rodriguez published a way to bypass the lock screen on the new iOS 7. Naturally, we at Detectify checked it out and played a bit with the bug.

• Make sure the camera app is running. This can be achieved by either using the control center, or swiping the bottom right corner on the main lock screen.
• Enter the control center (swipe the bottom center on the main lock screen)
• Open the timer app, in the bottom left corner next to the flashlight app
• Hold down the power button
• Press cancel
• Between the shutdown screen and the timer app, double tap the home button and hold down the second click for around half a second
• Swipe to the camera app

If you did everything correctly, you can now access the photo gallery and everything inside it. This includes sending mail, using Twitter and sending text messages.

Other ways to trigger the bug
We also discovered that this bug can be triggered in some other ways. For one, the bug can be triggered not only in the timer app, but also in the calculator app. Another way is to use Siri/Voice Control instead of the shutdown screen. Then you can apply the same "magic double tap" between Siri/Voice Control and the calculator app.


Temporary fix
So what can we do to protect ourselves until Apple releases a patch for this bug? You can shut off access to the Control Center while the phone is locked by going to Settings -> Control Center -> Access on Lock Screen.

Written by: Mathias


Server-side Javascript Injections and more!

Today's updates fill the needs of many of you out there! You asked for it, and now it's in the Detectify engine! Here's a breakdown of the stuff we've put in:

Verify your domain with Google Analytics

Having trouble editing your code? Don't want to upload files? No problem! You can now verify ownership of your domain using your Google Analytics account. Try it out in the dashboard or during sign-up!

National Vulnerability Database

Our fingerprinting has been extended using the U.S. National Vulnerability Database. Detectify will now test known vulnerabilities based on the versions we fingerprint on your domain, and warn you when there's a security issue in the version you're using.

Server-Side JavaScript Injections

We now have a feature for pentesting server-side JavaScript. That means we are able to find NoSQL injections in MongoDB, code execution flaws in Node.js and other flaws in exotic server-side JS technologies.
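For illustration, the classic NoSQL injection pattern looks like this (a simulated login check in Python; the field names are hypothetical and no real MongoDB driver is involved):

```python
import json

def build_query(body: str) -> dict:
    """Naively pass JSON credentials straight into a MongoDB-style query."""
    creds = json.loads(body)
    return {"user": creds["user"], "password": creds["password"]}

def is_safe(query: dict) -> bool:
    # Defence: require plain string values, so operator objects such
    # as {"$ne": ""} can't sneak into the query.
    return all(isinstance(v, str) for v in query.values())

attack = '{"user": "admin", "password": {"$ne": ""}}'
```

Sent to a naive endpoint, the `$ne` operator matches any password and logs the attacker in as admin; the `is_safe` check rejects it.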


Content-Security-Policy

Our site now sends a Content-Security-Policy header. Content-Security-Policy is a security header that allows website owners to declare from which sources users may load content. Read more about Content Security Policy here.

Download report

Yes, we finally added support for downloading your reports as PDF files, conveniently making them accessible offline. The design of the PDF reports is still a work in progress, so if you have any issues or suggestions, feel free to mail us about them!

Sign up


HTTP Strict Transport Security (HSTS)

HTTP Strict Transport Security, or just HSTS, is a security mechanism for websites and browsers. HSTS is used when a web server wants to tell its clients that they should only use HTTPS, and never plain HTTP.

This mechanism is useful because loads and loads of websites have lazy encryption discipline: while most of the website is loaded over HTTPS, some resources, such as JavaScript, plugin resources and images, are loaded over plain HTTP. This is horrendous. Using HTTPS is nearly pointless if some resources on the website are loaded over plain HTTP, since the session cookies will be sent with every request, including the plain HTTP ones. This may lead to session hijacking (unless, of course, the cookies have been flagged as "Secure").

HSTS is initiated by the server through the Strict-Transport-Security response header, which also specifies how long the policy should remain in effect. After that time, the website is considered non-HSTS by the browser unless another HSTS header is received.
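For concreteness, here is a typical policy value and a small parser for it (the max-age value is just a common choice, one year in seconds):

```python
def parse_hsts(value: str) -> dict:
    """Parse a Strict-Transport-Security header value."""
    policy = {"max_age": 0, "include_subdomains": False}
    for directive in value.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            # Number of seconds the browser should enforce HTTPS-only.
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            # Extend the policy to all subdomains as well.
            policy["include_subdomains"] = True
    return policy

# e.g. Strict-Transport-Security: max-age=31536000; includeSubDomains
policy = parse_hsts("max-age=31536000; includeSubDomains")
```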

If, for example, a user (let's call her Alice) had been using Firefox or Chrome, which both have a preloaded list of websites employing HSTS, she would have been a lot better off (given that the website was using HSTS and was mentioned in the preloaded HSTS list). Everything would have been transferred over HTTPS. This is a nice way to prevent SSL stripping attacks. SSL stripping is a method that enables an attacker to conduct a man-in-the-middle attack over unencrypted HTTP by moving the SSL "endpoint" from the user (Alice) to the man in the middle (Eve), while the traffic between Eve and the web server remains encrypted. This way, the web server thinks everything is okay, and Alice has no idea that the traffic is supposed to be encrypted, unless the web server uses HSTS. If HSTS is used, Alice's browser knows that the traffic is supposed to be encrypted, and can therefore determine that something fishy is going on when it isn't.

If you feel like going more in-depth, you can read RFC 6797 on the IETF website, or if you wish to deploy HSTS on your website, check out the OWASP page on the subject.

Do you care about security? Try Detectify!

Written by: Håkon Vågsether.


Performance boost, new verification methods and custom paths

Today’s update is a big one! Mostly behind the scenes updates but you are going to notice some things that we hope you’ll appreciate!

Performance boost

We have implemented a new HTML parser which makes scans much faster. For bigger sites, even hours faster: we've seen scan times go from 14 hours down to 2.

Not only are we scanning faster; the new HTML parser is also easier to work with, which will make our development easier and faster and allow us to keep our scanner even more up to date.

New verification methods

People have asked us to add more ways to verify ownership of their website that don't involve the web server or code. For that, we have added CNAME and TXT verification, which only require DNS changes.

Add a CNAME for a subdomain pointing to the right target, or add a TXT record, and you are good to go!

Custom paths

If you have a hidden path that robots can't find (maybe an admin panel in a hidden location?), you can now tell our scanner where it is.

We have also added a blacklist for paths you don’t want us to scan.

Other than that, we now have a new front page, be sure to check it out!

Let us know what you think!