Security testing behind login

Most web applications have areas that are accessible to all visitors, while other areas are only accessible to users with an account. Examples of this are users logging in to a webshop or a forum, but it could also be a protected development/pre-production environment.

A user often has access to more functionality when logged in than when not, e.g., to post comments on a forum, upload pictures to a profile, or complete a purchase. Hence, a comprehensive security evaluation of any web application needs to be able to test the areas behind a login.

Two common methods of login/authentication are Basic auth and HTML forms.

Basic auth

Basic auth is mainly used to protect whole systems from external access, e.g., a development environment.

To authenticate Detectify with Basic auth, simply provide the credentials for the domain under the domain settings in the dashboard.
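Under the hood, Basic auth is nothing more than the credentials base64-encoded into an Authorization header that is sent with every request. A minimal sketch (a hypothetical helper, not Detectify's implementation):

```python
import base64

def basic_auth_header(username, password):
    # Basic auth: "user:pass", base64-encoded, in the Authorization header.
    token = base64.b64encode(("%s:%s" % (username, password)).encode()).decode()
    return {"Authorization": "Basic " + token}

# The classic example from RFC 7617:
# basic_auth_header("Aladdin", "open sesame")
# → {"Authorization": "Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ=="}
```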

HTML forms

HTML forms are the “normal” logins you see on most websites, such as Facebook.

To authenticate Detectify with HTML forms, you need to record the login sequence and upload it to the domain under the domain settings. The sequence should be recorded with our newly released Chrome plugin, which contains a simple wizard that guides you through recording the login process. When the wizard is completed, save the trail and upload it to the domain.

Do you feel that something is missing from Detectify, or do you have a general comment? Hit us up at @detectify. We are aiming to improve Detectify and make the Internet a safer place.

Happy scanning!


Major updates to Detectify

We are releasing multiple major changes to Detectify, and this is the beginning of the new Detectify. Many hours have been invested in a new and improved UI. There are also multiple changes under the hood in the core of the service, e.g., an updated engine that better handles JS-based pages.

New user interface

The ambition with the new UI is to create a flexible design where it is easier for us to introduce new functionality to our users. The release plan is packed with features that will help you as a developer and security tester.

We have introduced new features for improved usability, e.g.,

  • Scanning behind login and testing of predefined user flows (e.g., check-out flows)

  • An API for you to build integrations into your development tools

Improved coverage of new and updated attack vectors

New and updated modules for vulnerability testing in this release include CSRF (testing of forms), SSL BREACH, Flash content sniffing (Rosetta Flash), DNS SPF (faking the sender of e-mails), DNSSEC tests, a CSS parser and, for all of you with internal legacy systems, VBS. An update of our JS engine brings improved coverage of DOM-based XSS.

Set-up recurring testing

Don’t forget to set up recurring scanning of your site to make sure you are always tested for new security issues. New attack vectors are constantly being identified and we release new versions of the scanner frequently.

Do you feel that something is missing from Detectify, or do you have a general comment? Hit us up at @detectify. We are aiming to improve Detectify and make the Internet a safer place.

Happy scanning!


The pitfalls of allowing file uploads on your website

These days a lot of websites allow users to upload files, but many are unaware of the pitfalls of letting users (potential attackers) upload files, even seemingly valid ones.

What’s a valid file? Usually, a restriction would be on two parameters:

  • The uploaded file’s extension
  • The uploaded file’s Content-Type

For example, the web application could check that the extension is “jpg” and the Content-Type “image/jpeg” to make sure it’s impossible to upload malicious files. Right?

The problem is that plugins like Flash don’t care about the extension or the Content-Type. If a file is embedded using an <object> tag, it will be executed as a Flash file as long as its content looks like a valid Flash file.

But wait a minute! Shouldn’t the Flash be executed within the domain that embeds the file using the <object> tag? Yes and no. If a Flash file (a bogus image file) is uploaded on the victim site and then embedded on the attacker’s site, the Flash file can execute JavaScript within the attacker’s domain. However, if the Flash file sends requests back to the victim site, it will be allowed to read the responses within the victim’s domain.

This basically means that if a website allows file uploads without validating the content of the file, an attacker can bypass any CSRF protection on the website.

The attack

Based on these facts we can create an attack scenario like this:

  1. An attacker creates a malicious Flash (SWF) file
  2. The attacker changes the file extension to JPG
  3. The attacker uploads the file to the victim site
  4. The attacker embeds the file on their own site using an <object> tag with type “application/x-shockwave-flash”
  5. The victim visits the attacker’s site, which loads the file through the embedded <object> tag
  6. The attacker can now send and receive arbitrary requests to the victim site using the victim’s session
  7. The attacker sends a request to the victim site and extracts the CSRF token from the response

A payload could look like this:

<object style="height:1px;width:1px;" data="" type="application/x-shockwave-flash" allowscriptaccess="always" flashvars="c=read&u="></object>

The fix

The good news is that there’s a fairly easy way to prevent Flash from doing this. Flash won’t execute the file if the server sends a Content-Disposition header like so:

Content-Disposition: attachment; filename="image.jpg"

So if you allow file uploads or print arbitrary user data in your service, you should always verify the contents as well as send a Content-Disposition header where applicable.
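As a sketch of what that looks like server-side (hypothetical helper names, framework-agnostic), the response headers for a user-uploaded file could be built like this:

```python
import mimetypes

def download_headers(stored_name):
    """Build response headers for serving a user-uploaded file.

    Forcing `Content-Disposition: attachment` makes browsers download the
    file instead of rendering it, and plugins such as Flash refuse to
    execute content served this way.
    """
    ctype = mimetypes.guess_type(stored_name)[0] or "application/octet-stream"
    # Strip any path components and quotes from the name to avoid header tricks.
    safe_name = stored_name.rsplit("/", 1)[-1].replace('"', "")
    return {
        "Content-Type": ctype,
        "Content-Disposition": 'attachment; filename="%s"' % safe_name,
        # Also stop browsers from MIME-sniffing the body into something active.
        "X-Content-Type-Options": "nosniff",
    }
```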

Another way to remediate issues like this is to host the uploaded files on a separate domain.

Other uses

But the fun doesn’t stop at file uploads! Since the only requirement of this attack is that an attacker can control the data at some location on the target domain (regardless of Content-Type), there’s more than one way to perform it.

One way would be to abuse a JSONP API. Usually, the attacker can control the output of a JSONP API endpoint by changing the callback parameter. However, if an attacker uses an entire Flash file as the callback, it can be used just like an uploaded file in this attack. A payload could look like this:

<object style="height:1px;width:1px;" data="" type="application/x-shockwave-flash" allowscriptaccess="always" flashvars="c=alert&u="></object>
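A simple server-side defense (a hypothetical sketch, not a specific framework's API) is to whitelist what a callback name may look like before reflecting it:

```python
import re

# Accept only plausible JavaScript identifiers, optionally dotted
# (e.g. "app.handleData"), so a raw SWF file can never pass as a callback.
CALLBACK_RE = re.compile(r"[A-Za-z_$][\w$]*(\.[A-Za-z_$][\w$]*)*")

def jsonp_response(callback, json_body):
    if not CALLBACK_RE.fullmatch(callback):
        raise ValueError("invalid JSONP callback name")
    # The leading /**/ comment is a common extra guard against content sniffing.
    return "/**/%s(%s);" % (callback, json_body)
```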

tl;dr: Send Content-Disposition headers for uploaded files and validate your JSONP callback names. Or host the uploaded files on a separate domain.

And like always, if you want to know if your website has issues like these, try a Detectify scan!

That’s it for now.

Written by: Mathias, Frans


How we got read access on Google’s production servers

To stay on top of the latest security alerts, we often spend time on bug bounties and CTFs. When we were discussing the challenge for the weekend, Mathias got an interesting idea: what target can we use against itself?

Of course. The Google search engine!

What would be better than scanning Google for bugs using the search engine itself? What kind of software tends to contain the most vulnerabilities?

  • Old and deprecated software
  • Unknown and hardly accessible software
  • Proprietary software that only a few people have access to
  • Alpha/Beta releases and otherwise new technologies (software in early stages of its lifetime)

For you bounty hunters, here’s a tip: Google dorking.

Combining these ideas, we started Google dorking for acquisitions and products, looking for antique systems without any noticeable number of users.

One system caught our eye: the Google Toolbar button gallery. We looked at each other and jokingly said “this looks vuln!”, not knowing how right we were.

Not two minutes later we noticed that the gallery provides users with the ability to customize their toolbar with new buttons. If you’re a developer, you’re also able to create your own buttons by uploading XML files containing various metadata (styling and such).

Fredrik read through the API specifications, and crafted his own button containing fishy XML entities. The plan was to conduct an XXE attack as he noticed the title and description fields were printed out when searching for the buttons.

The root cause of XXE vulnerabilities is naive XML parsers that blindly interpret the DTD of user-supplied XML documents. By doing so, you risk your parser doing a bunch of nasty things. Some issues include local file access, SSRF and remote file inclusion, denial of service, and possible remote code execution. If you want to know how to patch these issues, check out the OWASP page on how to secure XML parsers in various languages and platforms.
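To see what hardening looks like in practice, here is a sketch (Python standard library; an illustration, not the code any particular site uses) of parsing an untrusted button definition with external entity resolution turned off:

```python
import io
import xml.sax

# A malicious button definition: the DTD declares an external entity that a
# naive parser would resolve, inlining the contents of /etc/passwd.
MALICIOUS = b"""<?xml version="1.0"?>
<!DOCTYPE button [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<button><title>&xxe;</title></button>"""

class TitleHandler(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.chunks = []
    def characters(self, content):
        self.chunks.append(content)

def parse_hardened(xml_bytes):
    parser = xml.sax.make_parser()
    # The core XXE mitigation: refuse to resolve external entities.
    parser.setFeature(xml.sax.handler.feature_external_ges, False)
    parser.setFeature(xml.sax.handler.feature_external_pes, False)
    handler = TitleHandler()
    parser.setContentHandler(handler)
    parser.parse(io.BytesIO(xml_bytes))
    return "".join(handler.chunks)
```

With the features disabled, the external entity is simply skipped instead of resolved, so the title comes back empty rather than containing /etc/passwd.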

Nevertheless, the file got uploaded… and behold! First try: /etc/passwd

Second try (for verification purposes): /etc/hosts

Boom goes the dynamite.

What you see here is the /etc/passwd and the /etc/hosts of one of Google’s production servers. Our payloads served as a proof of concept to demonstrate the impact. We could just as well have tried to access any other file on their server, or moved on to SSRF exploitation in order to reach internal systems. To say the least, that’s pretty bad.

We contacted Google straight away while popping open some celebration beers. After 20 minutes we got a reply from Thai on the Google Security Team. They were impressed. We exchanged a few emails on the details back and forth over the coming days. In our correspondence we asked how much the vulnerability was worth. What we received as a reply was an XXE meme.

The bottles (or whatever it is that falls out) turned out to be worth $10,000, enough to cover a road trip through Europe.

tl;dr: We uploaded a malicious XML to one of Google’s servers. Turned out to be a major XXE issue. Google financed an awesome road trip for the team.

Thanks for reading.

Written by: Fredrik
Co-Author: Mathias

If Google can get hacked, are you sure your service is secure? Try Detectify here and see for yourself.


Chrome XSS Protection Bypass (using Rails)

What is the Chrome XSS protection?
The Chrome XSS Protection (also known as XSS auditor) checks whether a script that’s about to run on a web page is also present in the request that fetched that web page. If the script is present in the request, that’s a strong indication that the web server might have been tricked into reflecting the script. So in short, it blocks reflected XSS attacks.

A couple of months ago I discovered that the Chrome XSS Protection could be bypassed in Rails. Later, when I saw the issue brought up on Twitter by homakov, I figured I’d write something about it as well. Here’s how the testing went down:

First try
First off, we started by creating a dummy script with a straightforward XSS scenario. Here’s the code:

<h1>Variable: <%= raw params[:variable] %></h1>

Let’s test it with a basic cross-site scripting payload:

GET /?variable=<script>alert(1)</script> HTTP/1.1
<h1>Variable: <script>alert(1)</script></h1>

Oh, no! The XSS auditor blocked the attempt. Let’s try something a bit different!

The bypass

GET /?variable[<script>]=*alert(1)</script> HTTP/1.1
<h1>Variable: {"<script>"=>"*alert(1)</script>"}</h1>

It works! But why? Let’s have a closer look at the source code and strip away everything except the <script> tag:

<script>"=>" * alert(1)</script>

Okay, so a bit clearer. The asterisk works as a multiplication of whatever’s on either side, so the JavaScript will try to multiply the string “=>” by whatever the function alert() returns. Since alert doesn’t return anything, the right-hand value will be undefined and the final result of the calculation will be NaN.

The XSS auditor probably misses this because Rails doesn’t print exactly what the browser sent, making it hard to filter automatically. However, Internet Explorer’s XSS filter, as well as NoScript, catches it.

TL;DR: Chrome’s XSS auditor can be bypassed with rails like so: ?variable[<script>]=*alert(1)</script>.

Want to try the world’s cheapest web security scanner? Sign up here.
Written by: Mathias Karlsson


Detectify Responsible Disclosure Program

As of today, researchers can report security issues in Detectify services to earn a spot on our Hall of Fame as well as some cool prizes. The Detectify team has participated in most responsible disclosure programs out there, and we felt the time had come to have one of our own.

But our service is made for finding web vulnerabilities, so why do we need a disclosure program? Well, even though our services are based around finding security bugs in web applications, we are not so naive as to think that our own applications are 100% flawless. We take security issues seriously and will respond swiftly to fix verifiable security issues. If you are the first to report a verifiable security issue, we’ll thank you with some cool stuff and a place on our Hall of Fame page.

So how does the reporting process work? It’s a 5 step process:

  • A researcher sends a mail using the correct template
  • The researcher gets an automatic response confirming that we have received the issue
  • A support case is automatically created
  • The person assigned to the support case responds to the researcher, verifying the issue
  • The issue is patched and the researcher is showered in eternal glory

What bugs are eligible? Any typical web security bugs such as:

  • Cross-site Scripting
  • Open redirect
  • Cross-site request forgery
  • File inclusion
  • Authentication bypass
  • Server-side code execution

What bugs are NOT eligible? Any typical low impact/too high complexity such as:

  • Missing Cookie flags on non-session cookies or 3rd party cookies
  • Logout CSRF
  • Social engineering
  • Denial of service

So what are you waiting for?

Sign up for Detectify here.


Another iOS7 Lock Screen bypass – Control Center turned off

In our previous post we covered a bug released yesterday that lets anyone break into a phone running iOS 7.

We also wrote about some additional ways to trigger the bug, but all variants could be prevented by turning off the Control Center on the lock screen. So that’s what we, and everyone else, suggested.

However, we have discovered that this does in fact not prevent a similar bug from bypassing the lock screen. The new variant is based on the fact that Voice Control/Siri can make phone calls to known contacts, and by using the shutdown screen while calling, the double-tap trick can still be performed.


Currently we have no suggested patch/fix for this issue.

Written by Mathias/Frans


iOS 7 lock screen bypass write-up

Yesterday a researcher named Jose Rodriguez published a way to bypass the lock screen on the new iOS 7. Naturally, we at Detectify checked it out and played a bit with the bug.

• Make sure the camera app is running. This can be achieved by either using the control center, or swiping the bottom right corner on the main lock screen.
• Enter the control center (swipe the bottom center on the main lock screen)
• Open the timer app, in the bottom left corner next to the flashlight app
• Hold down the power button
• Press cancel
• Between the shutdown screen and the timer app, double tap the home button and hold down the second click for around half a second
• Swipe to the camera app

If you did everything correctly, you can now access the gallery and everything inside. This includes sending mail, using Twitter, and sending text messages as well.

Other ways to trigger the bug
We also discovered that this bug can be triggered in some other ways. For one, the bug can be triggered not only in the timer app, but also in the calculator app. Another way is to use Siri/Voice Control instead of the shutdown screen. Then you apply the same “magic double tap” between Siri/Voice Control and the calculator app.


Temporary fix
So what can we do to protect ourselves until Apple releases a patch for this bug? You can shut off access to the Control Center while the phone is locked by going to Settings -> Control Center -> Access on Lock Screen.

Written by: Mathias


Server-side Javascript Injections and more!

Today’s updates fill the needs of many of you out there! You asked for it, and now it’s in the Detectify engine! Here’s a breakdown of the stuff we’ve put in:

Verify your domain with Google Analytics

Having trouble editing your code? Don’t want to upload files? No problem! You can now verify the ownership of your domain using your Google Analytics account. Try it out in the dashboard or in the sign up!

National Vulnerability Database

Our fingerprinting has been extended using the U.S. National Vulnerability Database. Detectify will now test for known vulnerabilities based on the versions we fingerprint from your domain, and warn you when there’s a security issue in the version you’re using.

Server-Side JavaScript Injections

We now have a feature for pentesting server-side JavaScript. That means we are able to find NoSQL injections in MongoDB, code execution flaws in Node.js, and other flaws in exotic server-side JS technologies.
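To illustrate the class of bug this targets (a hypothetical example, not our test code), compare a MongoDB query built by splicing user input into a $where clause against a plain field match:

```python
def unsafe_query(username):
    # VULNERABLE: username is spliced into server-side JavaScript.
    # Input like  admin" || "1"=="1  turns the predicate into a tautology.
    return {"$where": 'this.username == "%s"' % username}

def safe_query(username):
    # Safe: the value is treated as data, never evaluated as JavaScript.
    return {"username": username}
```

The unsafe variant hands attacker-controlled text to the database's JavaScript engine, which is exactly the kind of server-side JS injection the scanner now looks for.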


Content-Security-Policy

Our site now sends a Content-Security-Policy header. Content-Security-Policy is a security header that allows website owners to declare from which sources users may load content. Read more about Content Security Policy here.
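As an illustration (the directives and the CDN host are made up, not our actual policy), a CSP header value is just a semicolon-separated list of directives, each followed by its allowed sources:

```python
def csp_header(directives):
    """Serialize a Content-Security-Policy value from directive -> sources."""
    return "; ".join(
        "%s %s" % (name, " ".join(sources))
        for name, sources in directives.items()
    )

# Example policy: only same-origin content, plus scripts from a
# hypothetical CDN host.
policy = csp_header({
    "default-src": ["'self'"],
    "script-src": ["'self'", "https://cdn.example.com"],
})
# policy == "default-src 'self'; script-src 'self' https://cdn.example.com"
```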

Download report

Yes, we finally added support for downloading your reports as PDF files, conveniently making them accessible offline. The design of the PDF reports is still in progress, and if you have any issues/suggestions, feel free to mail us about it!

Sign up


HTTP Strict Transport Security (HSTS)

HTTP Strict Transport Security, or just HSTS, is a security mechanism for websites and browsers. HSTS is used when a web server wants to tell its clients that they should only use HTTPS, and never plain HTTP.

This mechanism is useful because loads of websites have a lazy encryption discipline: while most of the website is loaded over HTTPS, some resources, such as JavaScript, plugin resources and images, are loaded over plain HTTP. This is horrendous. Using HTTPS is nearly pointless if some resources on the website are loaded over plain HTTP, since the session cookies will be sent with every request, including the plain-HTTP ones. This may lead to session hijacking (unless, of course, the cookies have been flagged as “Secure”).

HSTS is initiated by the server through its Strict-Transport-Security response header, which also specifies how long the policy remains valid. After that period, the website is considered non-HSTS by the browser unless another HSTS header is received.
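A sketch of building the header value (the one-year max-age is just a common choice, not a requirement):

```python
def hsts_header(max_age=31536000, include_subdomains=True, preload=False):
    """Build a Strict-Transport-Security header value.

    max_age is how long, in seconds, browsers should remember to
    use HTTPS only for this host.
    """
    parts = ["max-age=%d" % max_age]
    if include_subdomains:
        parts.append("includeSubDomains")
    if preload:
        # Only meaningful if the site is submitted to the browser preload lists.
        parts.append("preload")
    return "; ".join(parts)

# hsts_header() → "max-age=31536000; includeSubDomains"
```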

If, for example, Alice had been using Firefox or Chrome, which both have a preloaded list of websites employing HSTS, she would have been a lot better off (given that the website was using HSTS and was included in the preloaded HSTS list): everything would have been transferred over HTTPS. This is a nice way to prevent SSL stripping attacks. SSL stripping is a method that enables an attacker to conduct a man-in-the-middle attack unencrypted by moving the SSL “endpoint” from the user (Alice) to the man in the middle (Eve), while the traffic between Eve and the web server remains encrypted. This way, the web server thinks everything is okay, and Alice has no idea that the traffic is supposed to be encrypted, unless the web server uses HSTS. If HSTS is used, Alice’s browser knows that the traffic is supposed to be encrypted, and can therefore determine that something fishy is going on when it isn’t.

If you feel like going more in-depth, you can read RFC 6797 at the IETF website, or if you wish to deploy HSTS on your own website, check out the OWASP page on the subject.

Do you care about security? Try Detectify!

Written by: Håkon Vågsether.