Getting an application to market is challenging. Developers often work under tight deadlines and heavy pressure. But application security requires secure coding, then code vetting, quality and security checks, and testing again before the app's release. Turning to outside testers can help, but you first need to communicate your application's complexity.
Application penetration testing seeks to identify vulnerabilities before hackers discover and exploit them. This testing reveals real-world opportunities to compromise the application, gain unauthorized access to sensitive data, or even take over systems for malicious, non-business purposes.
Penetration testing provides a view of your organization's level of risk and should include recommendations for prioritizing remediation of the identified application flaws.
Application penetration testing can be manual or automated. RedTeam Security's approach consists of about 80% manual testing and about 20% automated testing. While automated testing can be efficient in the initial phases, an effective and comprehensive penetration test can only be realized through rigorous manual testing techniques.
Now that you have an understanding of the what and why of application penetration testing, you'll need to be prepared for the actual testing process. Bringing in a third party can help. After all, your own people have been working hard to develop and troubleshoot your app, but they know it so well they are dreaming its code (if they ever get a chance to sleep). Fresh eyes can help bring a more objective perspective.
Yet, for a successful pen test, you'll need to understand the application's complexity and be able to convey that to your testers. The problem is that an enormous number of variables are at play when trying to model any given application.
Nevertheless, we'll consider some of the standard ways people talk about application complexity.
One approach sees the penetration testers asking about the number of static and dynamic pages in the application. Dynamic pages are pages where the user supplies some manner of input to the app, whereas static pages have content that does not change.
While this method is often appropriate and successful, it doesn't always work. Page counts don't equate to complexity in any reliable way.
At the same time, some apps are single-page applications. In these cases, it doesn't make sense to talk about the number of static or dynamic pages. Instead, testers will typically shift to asking about functionality and counting it in some more meaningful way.
This might mean talking about the number of screens when referencing content. Or, since single-page apps are primarily API-driven, testers could estimate complexity by discussing the number of unique API requests used to serve content to the app.
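One rough way to count unique API requests is to scan server access logs and tally distinct method-plus-endpoint combinations. The sketch below illustrates the idea in Python; the log format, paths, and ID-normalization rule are all hypothetical assumptions for illustration, not a prescribed scoping method.

```python
import re
from collections import Counter

def count_unique_endpoints(log_lines):
    """Approximate single-page-app complexity by counting distinct
    HTTP method + endpoint combinations in access-log lines.
    Numeric path segments (record IDs) are normalized so that
    /api/users/1 and /api/users/2 count as the same endpoint."""
    endpoints = Counter()
    for line in log_lines:
        match = re.search(r'"(GET|POST|PUT|PATCH|DELETE) (\S+)', line)
        if not match:
            continue
        method, path = match.groups()
        path = path.split("?")[0]              # drop query strings
        path = re.sub(r"/\d+", "/{id}", path)  # normalize numeric IDs
        endpoints[f"{method} {path}"] += 1
    return endpoints

# Hypothetical access-log lines for illustration:
logs = [
    '10.0.0.1 - - "GET /api/users/42 HTTP/1.1" 200',
    '10.0.0.2 - - "GET /api/users/7 HTTP/1.1" 200',
    '10.0.0.1 - - "POST /api/orders HTTP/1.1" 201',
    '10.0.0.3 - - "GET /api/orders/9?expand=items HTTP/1.1" 200',
]
print(count_unique_endpoints(logs))
# Three unique endpoints: GET /api/users/{id}, POST /api/orders,
# and GET /api/orders/{id}
```

A count like this is only a starting point for a level-of-effort conversation; endpoints vary widely in how much attack surface each one actually exposes.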
Most of the time, a penetration tester will begin by asking about static and dynamic pages. If the organization cannot answer that question, the tester will ask about functionality or invite the clients to express application complexity in whatever way makes sense to them. The issue with this is that what makes sense to the application owner may not always translate to something that resonates with the tester evaluating complexity.
Part of the issue here is that there are many overloaded terms in the application complexity conversation. Basically, we don't always mean the same things when we say the same words. For example, a "dynamic page" in the RedTeam Security context refers to a page with user-supplied content (as identified above). Yet, in a development context, a dynamic page is determined primarily by the way the page changes based on information (content) it receives from the server.
Getting on the same page in terms of vocabulary can be further complicated by the fact that project managers (or others lacking deep technical knowledge of the application) are often the ones talking to the consulting company.
As RedTeam Security's VP of Operations Meeghan McGahan puts it, "The result is like having one group being French-speaking and the other being German-speaking while attempting to communicate detailed technical issues in their limited English. They have to find common ground for understanding."
Collecting all of the varied data values to model application complexity is challenging — and time-consuming. Sometimes it's simply easier to demo the application, but that is time-intensive too.
The app demo is typically done with the client in a web-based screen-share session. Occasionally, the client has an environment available where testers can evaluate the complexity themselves. A demo will get the pen testers close enough to the application's complexity to estimate level of effort (LOE) and create a proposal.