
The purpose of this post is to help clients better prepare, digest and act upon the results of a web application penetration test.


A large proportion of the penetration tests we run at Foregenix are web application tests (‘webapp pentests’ from now on); for simplicity I am including web service and general API tests in that broad category. While it may sound like a cliché, each webapp pentest has the potential to be quite different from the one you ran a couple of weeks ago, and a large contributing factor is the purpose the application serves. In a well-designed webapp pentest that purpose should be factored into the scope, and suitable tests should be performed when the time comes to execute the test.

Viewed from a mile-high perspective, a traditional webapp pentest consists of a few distinct phases:

  1. Scoping phase
  2. Application assessment
  3. Prioritisation of findings based on vulnerability, risk and exploitability
  4. Remediation of select findings
  5. Validation of corrective actions

Let's look at each phase in detail:

  1. Scoping

Scoping involves defining the playing field of the webapp pentest. Ideally this is set via a series of well-defined activities, starting with filling in a questionnaire as accurately as possible to provide the grounds for a subsequent discussion. This gives us an indication of the size of the web application and its overall structure in terms of authorisation and data access, and gives the client assurance that all key areas of the application will be tested accordingly.

During these sessions any additional business logic rules the application imposes – or serves – should also be discussed. This allows the tester to allocate sufficient time to checking the correct implementation – and susceptibility to bypass – of those business rules. Business logic can be associated with the general access control model of the application as well as with pure business rules imposed by the application, such as calculating shipping costs, setting the price of a good, logging actions, and so on.

Skipping a thorough scoping exercise means the foundations of your pentest are weak and your investment in testing is likely to be wasted, yielding at best a false feeling of ‘safety’.

  2. Application assessment

The pentesting team will take the application apart and start testing its susceptibility to vulnerabilities. A small degree of automation is normally acceptable at this stage, typically in the form of a scanner used to identify purely technical (and often obvious) vulnerabilities such as SQL injection and cross-site scripting. That said, it does not follow that anyone with a web application scanner can point it at an application and detect these vulnerabilities; while that may hold true in some cases, it does not in the majority. Imagine an application that uses a custom error page containing nothing other than an apology for the inconvenience. The scanner needs to know what that page is and which keywords it contains so that error conditions in the application can be identified. There are also cases – and we have come across a few of them in the past – where the application cannot be scanned at all for a number of reasons. Failing to identify this and simply pointing a scanner at the application leads to a false sense of security.

Finally, there are many areas in which a scanner is simply not well equipped to test for vulnerabilities and which require manual work. Authorisation (vertical and horizontal privilege escalation) and business logic vulnerabilities are a huge chunk of that bulk. To test for those, the tester should request – and must receive – multiple accounts with the relevant profile options enabled and containing a full set of data. (There is no security in a home-banking webapp that withstands daily Internet scans without blinking but then allows an authenticated user to transfer money from other customers’ accounts to his own, right?)
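To make the custom-error-page point concrete, here is a minimal sketch (all names and keywords are illustrative, not from any real scanner) of why error detection must be configured per application: a check that only knows stock database error strings is blind to an application that hides every failure behind a polite apology page.

```python
# Minimal sketch of keyword-based error detection. A scanner that only knows
# well-known database error strings misses an application that swallows all
# errors behind a friendly apology page - unless the tester supplies the
# site-specific keyword. All signatures and names here are illustrative.

STOCK_ERROR_SIGNATURES = [
    "You have an error in your SQL syntax",   # MySQL
    "Unclosed quotation mark",                # SQL Server
    "ORA-01756",                              # Oracle
]

CUSTOM_ERROR_KEYWORD = "We apologise for the inconvenience"  # site-specific

def looks_like_error(response_body: str, custom_keywords=()) -> bool:
    """Return True if the response body indicates a server-side error."""
    signatures = STOCK_ERROR_SIGNATURES + list(custom_keywords)
    return any(sig in response_body for sig in signatures)

body = "<html>We apologise for the inconvenience. Please retry.</html>"
# Invisible to the default check...
assert not looks_like_error(body)
# ...but detectable once the custom keyword is configured.
assert looks_like_error(body, custom_keywords=[CUSTOM_ERROR_KEYWORD])
```

Real scanners do this with configurable error signatures or response diffing; the point is simply that someone has to tell the tool what "broken" looks like for this application.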


  3. Prioritisation of findings based on vulnerability, risk and exploitability

Speaking of SQL injection and exploitability: from a personal perspective, I am in a love–hate relationship with sqlmap at the moment. It is a great tool that industrialised SQL injection exploitation, but at the same time it took background knowledge out of the penetration tester’s head. My experience is that sqlmap works perfectly about 30% of the time, plus an extra 5–10% with tamper scripts. That leaves a 60–65% margin to drive a truck through, and begs the question: what happens with the rest? Is it reported but categorised as non-exploitable? Is it left out as a false positive for fear of client push-back? Is it really not exploitable? A lot of that comes down to the experience of the penetration testing team performing the test. We will be covering one such case in an upcoming blog post.
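For readers unfamiliar with the tamper scripts mentioned above: they are plain Python files in sqlmap's tamper/ directory exposing a `tamper(payload, **kwargs)` function that rewrites each payload before it is sent, typically to slip past a filter. The sketch below mimics the classic space-to-inline-comment trick in its crudest form (real sqlmap tamper scripts also declare a priority and handle quoting more carefully).

```python
# Rough shape of a sqlmap tamper script (simplified sketch; a real one lives
# in sqlmap's tamper/ directory and declares __priority__). This crude
# version replaces every space with an inline SQL comment, /**/, a classic
# way to evade naive filters that key on whitespace-separated keywords.

def tamper(payload, **kwargs):
    """Replace each space in the payload with an inline SQL comment."""
    return payload.replace(" ", "/**/") if payload else payload

print(tamper("1 AND 1=1"))  # 1/**/AND/**/1=1
```

When sqlmap's stock payloads fail, it is often this kind of small, target-specific transformation – written by a tester who understands *why* the payload was blocked – that turns "not exploitable" into "exploitable".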


Again, this step of an application assessment falls to the penetration testers. As I often say, not all vulnerabilities are created equal. Some, when exploited, have a direct effect on the security of the entire application; others affect only a related component; others merely leak information; and so on. In addition, some vulnerabilities are directly exploitable while others have a limiting factor that prevents them from being exploited at all. An example of the latter is a SQL injection vulnerability on an HTTP parameter with a 10-character, server-imposed length limit. The vulnerability is there, but the 10-character limit prevents it from ever being exploited – which, we suspect, is why it is still there. So exploitability should also play an important role in rating and prioritising vulnerabilities.
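The length-limit example can be sketched in a few lines (the parameter name, the cap, and the behaviour of truncating rather than rejecting are all illustrative assumptions): any useful injection payload simply does not fit.

```python
# Sketch of the limiting factor described above: the parameter is injectable
# in principle, but a server-side 10-character cap mangles anything long
# enough to be useful. Payload and limit are illustrative; a real server
# might reject over-long values instead of truncating them.

MAX_PARAM_LENGTH = 10  # server-imposed limit on the vulnerable parameter

def accepted_by_server(value: str) -> str:
    """Model the server truncating over-long parameter values."""
    return value[:MAX_PARAM_LENGTH]

payload = "' UNION SELECT username, password FROM users--"
print(len(payload))                 # 46 characters, far over the limit
print(accepted_by_server(payload))  # ' UNION SE  (mangled, never executes)
```

A finding like this still belongs in the report – the root cause is real and the limit could change tomorrow – but its exploitability rating, and hence its priority, should reflect the mitigating factor.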


  4. Remediation of select findings

So, at this point the client has received the penetration testing report and focus shifts to that part of the equation. Experience shows that the worst ways to use that report are:

  • Use it to “Penetrate and Patch”, and add spot checks to counter all identified vulnerabilities.
  • Treat the report purely as a compliance artefact, remediating the high-risk findings only because a compliance directive mandates it.

Ideally, the findings in the report should be triaged against the source code of the application and the root cause of each vulnerability identified. Clients need to understand that a penetration test is a snapshot-in-time test: it assesses the security of the application at a given moment. It is also limited by the functionality and data available to the user profile provided to the tester. In heavily user-segregated applications, or in applications that “open up” functionality based on the data in a user’s profile – e.g. a Loans menu in a banking application that only appears if the user has a loan attached to his profile – this can leave parts of the application untouched. This is the argument behind requesting user accounts of multiple levels, each containing a full set of data, that we made in point 2 above.

By triaging the findings against the application codebase, areas that are vulnerable but did not make it into the penetration testing report – for whatever reason – can be identified, and patches can be deployed at key points to counter attacks across the full breadth of the application rather than as specific spot checks. The most important outcome, however, is the learning process the developers go through: they can now recognise bad coding patterns, issues with the way functions are used, and so on, along with their impact on the security of the application… and hopefully prevent them from appearing again in the future.
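As a sketch of the difference between a spot check and a root-cause fix (schema and function names are illustrative, using Python's standard sqlite3 module): instead of filtering known-bad strings at the one URL named in the report, the vulnerable string-concatenation pattern is replaced with a parameterised query, which closes the hole everywhere the pattern was used.

```python
# Root-cause remediation sketch: a parameterised query keeps SQL structure
# and user data separate at the driver level, so an injected quote arrives
# as literal data rather than as part of the query. Schema is illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 100)")

def get_balance(conn, name):
    # The ? placeholder is filled by the driver; no string concatenation.
    row = conn.execute(
        "SELECT balance FROM users WHERE name = ?", (name,)
    ).fetchone()
    return row[0] if row else None

assert get_balance(conn, "alice") == 100
# A classic injection attempt is now just an oddly named, nonexistent user.
assert get_balance(conn, "alice' OR '1'='1") is None
```

This is the kind of pattern-level lesson a code triage teaches developers, as opposed to a blacklist bolted onto a single endpoint.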

  5. Validation of corrective actions

At this point the focus returns to the penetration testing team, whose job is to assess the effectiveness – and susceptibility to bypass – of the applied patches against the specific findings detailed in the penetration testing report. A funny story from way, way back: faced with a blind SQL injection vulnerability, a developer applied a blacklist filter built to counter the exact requests presented in the report’s proof-of-concepts, effectively blocking the injection string “and 1=1--” but leaving “and 2=2--” untouched.

Steps 4 and 5 above can be run over multiple iterations, addressing the most critical vulnerabilities first and planning to address the rest in due time.

When all is said and done, the security of the application should be at a much higher level, and the client will have gained the maximum from their penetration testing engagement.
