
October 07, 2007



"Cenzic Hailstorm and Hewlett-Packard WebInspect (post-update) were capable of analyzing and detecting vulnerabilities in the Ajax application, albeit only when we manually walked them through the relevant bits."

If you don't walk through and set the traversal manually for that, how do you know the scanner is actually spidering successfully?


I have a different question for you -

How do you know that you (as a human) covered the entire application when you browse it manually? (Consider a VERY large application, containing thousands of web pages and numerous features...)

A good scanner will give you proper visualization and statistics regarding site structure, pages, HTTP requests, parameters and cookies, so that you can assess (as a human), that it covered the parts of the application that you are interested in testing.

Romain Gaucher

Spidering is something, but understanding is also important. How do you know that the tool fully understood the rich content and did a real assessment?


Hi Romain,

I guess the first question I have is: what does "understanding the application" really mean?

Does one need to fully understand the context and functionality of a piece of software in order to be able to test it?

I guess the answer lies in the type of testing you wish to perform. Each type of test requires a different level of analysis and understanding of the application.

Testing for "low hanging fruit" vulnerabilities, which is something that automated scanners are best at, doesn't necessarily mean that you must understand the application like a human does. Sometimes, all you need is to locate the inputs/outputs of the application, and to be able to "understand" or validate the results.

I do agree that certain vulnerabilities (e.g. what people like to call Logical Vulnerabilities), require more human-like attributes, in order to be able to locate (or exploit) them. But, I do believe that automated tools can assist in such scenarios as well (i.e. semi-automatic assessments).

In addition, I do believe that with proper algorithms, heuristics and a bit of innovation, automated scanners will, in the future, be able to assist human analysts more than they do today. Our team has been spending a lot of time researching these directions, and you can already see such things in AppScan today.

And BTW - did you read my disclaimer? :-)

Romain Gaucher

Actually, I was jumping on what Marcin said. How do you know that the tool did the job? Just because the tool isn't reporting anything doesn't mean you are secure ;)
And just because the tool spidered the scripts doesn't mean it was able to understand them properly...

BTW, I did read the disclaimer, and I do agree with your post Man vs. Machine


Let's see if I understand your question - are you asking: "how do I know that the tool located all of the vulnerabilities that exist in the application?"

That's a tough question to answer. Actually, I don't think a human can know for certain that he/she located all of the vulnerabilities, by manually testing the application (even by doing manual code review).

Romain Gaucher

Well, I was more talking about understanding rich contents: Flash, JavaScript, etc.

Let's say I have a nice Ajax application, and AppScan is able to read it, parse the JavaScript, etc.
How do I know that AppScan understands my JavaScript properly? If there is some obscure code that it cannot understand (too recursive, totally dynamic...?), will it report that, or will it just not tell me that it can't do anything with the script there?

Hope that makes my thoughts clearer... :X



Assuming you are referring to traversing heavily JavaScripted web applications, AppScan actually executes the JavaScript code, raises events, and performs actions as if it were a real human user. The output (e.g. links, requests, etc.) is then collected and analyzed.

There's no need for AppScan to "understand" your obfuscated JS code. As long as a browser can execute it, AppScan can do the same.
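The execution-based approach described above can be sketched like this (a toy illustration only, not AppScan's actual code; the `myxhr` stub and the page structure are invented for the example). Instead of parsing the JavaScript for meaning, the crawler fires each element's event handlers and records whatever requests result:

```javascript
// Toy sketch of execution-based crawling (not any real scanner's code).

// Hypothetical stub transport that records every URL requested.
const observedRequests = [];
function myxhr(url) {
  observedRequests.push(url);
}

// A toy "page": elements with attached handlers, as a browser would hold them.
const page = [
  { id: 'nav',    onclick: () => myxhr('/menu.json') },
  { id: 'search', onclick: () => myxhr('/search?q=test') },
];

// The crawler simply dispatches the event on every element it finds...
for (const el of page) {
  el.onclick();
}

// ...then harvests the output (links, requests) as new material to test.
console.log(observedRequests); // ['/menu.json', '/search?q=test']
```

The point is that the crawler never needs to "understand" what the handlers do; it only needs to execute them and observe the traffic they generate.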


@Marcin: By monitoring the web server logs to check coverage of particular URLs after an automated spider run.
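That log-based coverage check can be sketched roughly as follows (all names and log lines here are illustrative, not from any specific tool): compare the URLs the spider actually requested, as seen in the access log, against the paths you know the application serves.

```javascript
// Sketch of a log-based spider-coverage check (illustrative only).

// Paths the application is known to contain (e.g. from a manual crawl).
const knownPaths = ['/login', '/account', '/account/transfer', '/admin/report'];

// Simplified access-log lines written during the automated spider run.
const accessLog = [
  '10.0.0.5 - - "GET /login HTTP/1.1" 200',
  '10.0.0.5 - - "GET /account HTTP/1.1" 200',
  '10.0.0.5 - - "POST /account HTTP/1.1" 302',
];

// Extract the request path from each log line.
const spidered = new Set(
  accessLog.map(line => line.match(/"(?:GET|POST) (\S+) /)[1])
);

// Anything known but never requested was missed by the spider.
const missed = knownPaths.filter(p => !spidered.has(p));
console.log(missed); // ['/account/transfer', '/admin/report']
```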

Also, I did both manual crawls and automated crawls with all products, so it was rather easy to see the differences in findings that way as well.

@Ory: Yup! There are downsides to both methodologies -- I think everybody agrees that the best results require both man and machine to get the job done.

Of course, if all the apps had been Java based (only one was), Fortify's Tracer might have been a good way to quantify coverage.


Romain Gaucher

So, if, somewhere in my JS code, I have an async call guarded by a condition that is never reached in my web application, AppScan will not test for it?

function foo(bar = 1) {
    if (bar == 1) {
        // normal use
    } else {
        // this is the only place where 'special/error.py' occurs
        myxhr('special/error.py', globalPOST);
    }
}

In all my code I only call foo() with no parameters. Will AppScan be able to find 'special/error.py' and test it?
If so, I really want to test this -- will need to ask for a new version of AppScan first.
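To illustrate Romain's point: an execution-only crawler that never satisfies the `bar != 1` condition will never issue that request, but a complementary static pass over the raw source can still surface the string literal. A rough sketch (purely illustrative; this says nothing about how AppScan actually handles dead code):

```javascript
// Sketch: static string extraction finds URL-like literals even in
// branches that execution-based crawling never reaches.

const source = `
function foo(bar = 1) {
  if (bar == 1) {
    // normal use
  } else {
    myxhr('special/error.py', globalPOST); // never reached at runtime
  }
}
foo();
`;

// Pull every quoted literal that looks like a path or script name.
const urlLike = [...source.matchAll(/'([^']*\.(?:py|php|asp|jsp|html?))'/g)]
  .map(m => m[1]);

console.log(urlLike); // ['special/error.py']
```

Of course, a string found this way still has to be turned into a valid request and validated, which is exactly where the "understanding" question comes back in.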
