CompTIA PenTest+ Chapter 1 : Planning and Scoping

Table of contents

1.1 Explain the importance of planning for an engagement
1.2 Explain Key Legal concepts
1.3 Explain the importance of scoping an engagement properly
1.4 Explain the key aspects of compliance-based assessments

The CompTIA PenTest+ exam divides the penetration testing process into four stages,

[Figure: the four-stage pen-test process]

These are the tools noted as being included within the exam itself, grouped by category:

  • Scanners: Nikto, OpenVAS, sqlmap, Nessus, Nmap
  • Credential Testing Tools: Hashcat, Medusa, Hydra, CeWL, John the Ripper, Cain and Abel, Mimikatz, Patator, DirBuster, W3AF
  • OSINT: WHOIS, Nslookup, FOCA, theHarvester, Shodan, Maltego, Recon-ng, Censys
  • Web Proxies: OWASP ZAP, Burp Suite
  • Software Assurance: FindBugs/find-sec-bugs, Peach, AFL, SonarQube, YASCA
  • Social Engineering Tools: SET, BeEF
  • Remote Access Tools: Secure Shell (SSH), Ncat, Netcat, Proxychains
  • Debuggers: OllyDbg, Immunity Debugger, GDB, WinDbg, IDA
  • Wireless: Aircrack-ng, Kismet, WiFite
  • Mobile Tools: Drozer, APKX / APK Studio
  • Networking Tools: Wireshark, Hping
  • Miscellaneous Tools: SearchSploit, PowerSploit, Responder, Impacket, Empire, Metasploit

Scanners

  • Nessus is a commercial vulnerability scanning tool used to scan a wide variety of devices.
  • OpenVAS is an open-source alternative to commercial tools such as Nessus. OpenVAS also performs network vulnerability scans.
  • Sqlmap is an open-source tool used to automate SQL injection attacks against web applications with database backends.
  • Nikto and W3AF are open-source web application vulnerability scanners.
  • Nmap is the most widely used network port scanner and is a part of almost every cybersecurity professional’s toolkit.
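To make this a little more concrete, here is a minimal, hedged sketch (Python, standard library only) of driving an Nmap service scan from a script; the target address is a placeholder from the documentation range, not a real host.

import subprocess

# Hedged sketch: run an Nmap service/version scan against a lab host you are
# authorised to test, and capture the normal output for later reporting.
target = "192.0.2.10"  # placeholder address from the documentation range

result = subprocess.run(
    ["nmap", "-sV", "-p", "1-1024", target],
    capture_output=True,
    text=True,
    check=False,
)
print(result.stdout)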

Credential Testing Tools

  • Hashcat, John the Ripper, Hydra, Medusa, Patator, and Cain and Abel are password cracking tools used to reverse engineer hashed passwords stored in files (see the sketch after this list for the underlying idea).
  • CeWL is a custom wordlist generator that searches websites for keywords that may be used in password guessing attacks.
  • Mimikatz retrieves sensitive credential information from memory on Windows systems.
  • DirBuster is a brute-forcing tool used to enumerate files and directories on a web server.
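To illustrate the underlying idea (not any one tool's actual implementation), here is a minimal Python sketch of a dictionary attack against a single MD5 hash; the hash and wordlist are invented for the example.

import hashlib

# Stand-in for a hash recovered from a compromised system.
target_hash = hashlib.md5("letmein".encode()).hexdigest()

# A tiny wordlist standing in for something like rockyou.txt or CeWL output.
wordlist = ["password", "123456", "letmein", "qwerty"]

for candidate in wordlist:
    if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
        print(f"Match found: {candidate}")
        break
else:
    print("No match in wordlist")

Real crackers like Hashcat and John the Ripper test billions of candidates per second across many hash formats; the loop above only shows the concept.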

OSINT:

  • WHOIS tools gather information from public records about domain ownership.
  • Nslookup tools help identify the IP addresses associated with an organisation (a small lookup sketch follows this list).
  • theHarvester scours search engines and other resources to find email addresses, employee names, and infrastructure details about an organisation.
  • Recon-ng is a modular web reconnaissance framework that organises and manages OSINT work.
  • Censys is a web-based tool that probes IP addresses across the Internet and then provides penetration testers with access to that information through a search engine.
  • FOCA (Fingerprinting Organisations with Collected Archives) is an open-source tool used to find metadata within Office documents, PDFs, and other common file formats.
  • Shodan is a specialised search engine to provide discovery of vulnerable Internet of Things (IoT) devices from public sources.
  • Maltego is a commercial product that assists with the visualisation of data gathered from OSINT efforts.
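As a tiny illustration of the kind of lookup nslookup and theHarvester automate at scale, here is a hedged Python sketch that resolves a few hypothetical hostnames to IP addresses.

import socket

# Hypothetical hostnames gathered during OSINT; not real engagement data.
hosts = ["www.example.com", "mail.example.com", "vpn.example.com"]

for host in hosts:
    try:
        ip = socket.gethostbyname(host)  # the same forward resolution nslookup performs
        print(f"{host} -> {ip}")
    except socket.gaierror:
        print(f"{host} -> no record")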

Web Proxies

  • OWASP ZAP is a free, easy-to-use web proxy that sits between your browser and the server, showing the headers of the requests and responses exchanged between you (the client) and the server. It can also crawl websites.
  • Burp Suite offers a similar set of tools, and its premium version adds project management and a greater range of tooling.

Software Assurance (Fuzzers and Static Code Analysers)

In addition to debuggers, penetration testers also make use of other software assurance and testing tools. Those you'll need to be familiar with for the exam include:

  • FindBugs and find-sec-bugs are Java software testing tools that perform static analysis of code.
  • Peach and AFL are fuzzing tools that generate artificial input designed to test applications.
  • SonarQube is an open-source continuous inspection tool for software testing.
  • YASCA (Yet Another Source Code Analyser) is another open-source software testing tool that includes scanners for a wide variety of languages. YASCA leverages FindBugs, among other tools. You’ll learn more about each of these tools in Chapter 9, “Exploiting Application Vulnerabilities.”

Social Engineering Tools

  • The Social Engineer Toolkit (SET) provides a framework for automating the social engineering process, including sending spear phishing messages, hosting fake websites, and collecting credentials.
  • Similarly, the Browser Exploitation Framework (BeEF) provides an automated toolkit for using social engineering to take over a victim’s web browser.

Remote Access

  • Secure Shell (SSH) provides secure encrypted connections between systems.
  • Ncat and Netcat provide an easy way to read and write data over network connections (a minimal socket sketch follows this list).
  • Proxychains allows testers to force connections through a proxy server where they may be inspected and altered before being passed on to their final destination.
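To show what "read and write data over network connections" means in practice, here is a minimal netcat-style client sketched in Python; the host and port are placeholders for a listener you control (for example, one started with Ncat in listen mode).

import socket

HOST, PORT = "127.0.0.1", 4444  # placeholder listener you control

# Connect, send a line, and print whatever comes back - the essence of netcat.
with socket.create_connection((HOST, PORT), timeout=5) as conn:
    conn.sendall(b"hello from the tester\n")
    print(conn.recv(4096).decode(errors="replace"))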

Debuggers

Debugging tools provide insight into software and assist with reverse engineering activities. Penetration testers preparing for the exam should be familiar with five debugging tools:

  • Immunity Debugger is designed specifically to support penetration testing and the reverse engineering of malware.
  • GDB is a widely used open-source debugger for Linux that works with a variety of programming languages.
  • OllyDbg is a Windows debugger that works on binary code at the assembly language level.
  • WinDbg is another Windows-specific debugging tool that was created by Microsoft.
  • IDA is a commercial debugging tool that works on Windows, Mac, and Linux platforms.

In addition to decompiling traditional applications, penetration testers also may find themselves attempting to exploit vulnerabilities on mobile devices. You should be familiar with three mobile device security tools for the exam.

Mobile Tools

  • Drozer is a security audit and attack framework for Android devices and apps.
  • APKX and APK Studio decompile Android application packages (APKs).

Wireless

  • Aircrack-ng. The general-purpose WiFi toolkit of choice, offering the ability to crack WEP, WPA, and WPA2 networks (albeit weakly configured ones).
  • Kismet. Performs wireless network reconnaissance and can show the relationships between different networks and the nodes on those networks.
  • WiFite. A blunter, automated tool; it is easy to use and, because of its power, very often used.

Miscellaneous Tools

  • Metasploit is, by far, the most popular exploitation framework and supports thousands of plug-ins covering different exploits.
  • SearchSploit is a command-line tool that allows you to search through a database of known exploits.
  • PowerSploit and Empire are Windows-centric sets of PowerShell scripts that may be used to automate penetration testing tasks.
  • Responder is a toolkit used to answer NetBIOS and LLMNR name queries from Windows systems on a network.
  • Impacket is a set of network tools that provide low-level access to network protocols.

Networking

  • Wireshark is a protocol analyser that allows penetration testers to eavesdrop on and dissect network traffic.
  • Hping is a command-line tool that allows testers to artificially generate network traffic.
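In the spirit of what Hping does, here is a hedged sketch using the third-party scapy library (not Hping itself, and typically requiring root privileges) to craft and send a single ICMP echo request; the destination is a documentation address standing in for an authorised target.

# Requires the third-party scapy package and usually root privileges.
from scapy.all import IP, ICMP, sr1

reply = sr1(IP(dst="192.0.2.10") / ICMP(), timeout=2, verbose=False)

if reply is not None:
    print(reply.summary())
else:
    print("No reply (host down, filtered, or not responding)")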

1.0 Introductory

Before we jump into the nitty-gritty of each chapter, it is important to understand what exactly we're aiming towards, what a penetration tester actually is, and what mindset you should have as a penetration tester.

The jump from a normal system administrator, who may use tools like netcat, nmap and the like, to a network penetration tester is that the latter uses these tools to find weaknesses, circumvent the blanket of security, and perform unauthorised activities - all under a binding legal contract which explicitly declares consent to breach the infrastructure. Without that contract you're just a criminal.

When I first saw the description of a pen tester, it struck me as quite similar to being an archaeologist: you don't really know what's in this piece of rock, but by selecting the right tools you can start to make breakthroughs and surmise more about the rock itself - the date it formed, its type, and so on. Over time you build enough of a portrait to completely understand the rock. The only thing that really differs is the intention: we seek to break this rock!

Well, there are three main goals for a penetration test during an “excavation”:

  • Seek unauthorised access to information or a system. This is a breach of confidentiality and is called a disclosure attack. Typically, in the beginner-level systems used for practice, the aim will be to find an admin's password or to achieve administrator-level privileges.
  • We could also seek to modify information or systems. Botching configuration files, replacing certain binaries, or changing the hashed password of an account all exploit weak integrity in the system. Such attacks are called alteration attacks.
  • And finally, we could make that information or system unavailable. Messing with availability is called a denial attack, the most notorious form being the DDoS attack.

[Figure: the DAD and CIA triads]

Alongside this general concept are the different states of data; a company's data can be in any one or more of these states at a given time:

  • storing data, in archives or databases.
  • processing data, applying business logic to it.
  • transmitting data over the wire.

These ideas are the cornerstone of cyber security, and in an ideal world we could completely secure every aspect of the triad - but in practice we usually end up compromising between confidentiality and availability: if you open yourself up, you may get hit. Speaking of boxing analogies, there is a distinct difference in the mindset of a cyber professional who is defending a system versus one who is attacking - put the former into the shoes of Luis Ortiz and the latter into the boots of Deontay Wilder: Ortiz has to be perfect for twelve rounds and cannot afford a single mistake, whereas Wilder only has to find one fault.

If an attacker can find just one vulnerability - in a server, an employee's computer, a gap in employee practice - then it's over…

Moreover, because compromise is unavoidable and we cannot be perfect, we begin to categorise. More often than not this is perfectly fine, as a lot of vulnerabilities are not too bad - they ebb just slightly into the disclosure or alteration side of things - but most companies cannot accept being unavailable. They often draw the line here and create as many defences as possible to stop (Distributed) Denial of Service ((D)DoS) attacks.

After categorising, they may invest in equipment to tackle the top five most compromising vulnerabilities; so if they were suffering a lot of physical attacks against their store, they would add cameras and security guards and give staff the right RFID badges if need be. It's about understanding exactly what the attack vectors are: exactly what is available and visible to the outside world (hopefully everything public was intentionally made public), and what we need to hide. Integrity dictates the degree to which things need to be hidden versus public, though the sensitivity of the data is a massive factor as well.

So penetration testers are basically here to simulate what the bad guys will try to do and to see whether the procedures and security controls actually hold up. If the controls were those above, we could try our luck at forging an RFID badge - or can we get away with just wearing a mask? These sorts of ideas: being incredibly pedantic and never letting go of even the minutest of flaws… In more technological assessments we learn to create blueprints and articulate the different layers of infrastructure - what we can easily secure versus what is more exposed.

Oftentimes when new services get rolled out by companies, security is just an afterthought - deadlines are simply too tight, a notable case being the Cyberpunk 2077 release… Other industries which care a little more for their customers - maybe because they have to by law - have a mandatory penetration test and security evaluation quarterly, bi-annually, or annually, depending on the company and industry. The financial services industry is a big one, with the Payment Card Industry Data Security Standard (PCI DSS). This is a necessary shackle for any company involved in the storage, processing, or transmission of credit and debit card information - the three aspects of data handling we discussed earlier. The standard aims to secure the Cardholder Data Environment (CDE), which is the repository for the financial information itself. The PCI DSS has some good guidelines, such as:

  • Perform external penetration testing at least annually and after any significant infrastructure or application upgrade or modification (such as an operating system upgrade, a sub-network added to the environment, or a web server added to the environment).
  • Perform internal penetration testing at least annually and after any significant infrastructure or application upgrade or modification (such as an operating system upgrade, a sub-network added to the environment, or a web server added to the environment).
  • Exploitable vulnerabilities found during penetration testing are corrected and the testing is repeated to verify the corrections.
  • If segmentation is used to isolate the CDE from other networks, perform penetration tests at least annually and after any changes to segmentation controls/methods to verify that the segmentation methods are operational and effective, and isolate all out-of-scope systems from systems in the CDE

The people able to conduct these tests are either internal penetration testers or external parties: consultancies, outside companies, or parent companies. Internal teams are, ideally, separate from the other tech departments that maintain and serve products, but oftentimes in smaller companies the roles mesh. The teams may be convened to perform tests every week, or only to fulfil industry standards. If there is enough work to keep the team busy - which would imply the company is both large and handles sensitive data - then its people become subject matter experts on the company, which saves a lot of time compared with re-teaching every consultancy that walks through the door. However, because these teams are usually the ones who put certain security controls in place, they may have a bias towards conventional thinking about those controls and a degree of familiarity which evades creativity - we need the oddball methods an attacker would use.

Internal penetration tests can either mean:

  • Done by an internal team of employees
  • Done on the internal network, which shouldn't be available to the outside world.

So take note of the context when such a term is used.

Likewise we also have external penetration tests, which are antithetical in both senses of the word, and this is where we use a consulting firm, a freelancer, etc. Here we could hire a seriously talented hacker with fresh eyes on the system - unless their company helped set up the organisation in the first place, which steps into the same bias as the internal team… It would be awkward for them to report many problems with infrastructure they helped design (what would that do for their reputation?), and it would be equally shady to report that nothing was wrong, even if there really were no red flags… Ideally the external pen tester spends as little time as possible ironing out the requirements - which should be clearly defined - and instead gets to work, utilising skills and tools which may not be available internally. Agencies can usually afford a wide variety of talent, which means a mix of incredibly niche skill-sets and incredibly broad skill-sets; moreover, attack techniques evolve over time, and if the client doesn't have the resources or time to evolve with the landscape, they should be able to put their trust in an external team which keeps its finger on the pulse of the latest trends.

The Cyber Kill Chain Model

This is a more sophisticated model than the traditional four-stage process that CompTIA uses:

  • Planning and scoping
  • Vulnerability Identification and Information Gathering
  • Attacking and Exploitation
  • Reporting and communicating results

Whilst this is a good overview, there are many large sub-components which we need to take into account - for example, how do we attack? There are stages in which we plan out the attack: from developing the code we hope will compromise the system, to drafting the phishing email that will be used against the target. This development is called weaponisation. After this we have to deliver the crucial payload or input, which is the delivery stage. So you can see there is more than meets the eye to the attacking and exploitation step, for example.

[Figure: CompTIA's four stages mapped against the Cyber Kill Chain]

These lines are meant to represent the bounds of the topics, so reconnaissance sits only within the first phase of gathering information and building up a catalogue of accounts, IPs, vulnerabilities, etc. Stages two through seven give a much more in-depth explanation of what actually happens under "Attacking and exploiting"…

1.1 Explain the importance of planning for an engagement

The first step in any major penetration test is to establish an agreement between the company having the test done and the person or company performing it. Many things need to be established at this stage: legal protections, scope, duration, the infrastructure that can and can't be targeted, and so on.

The scope of the penetration test is the first formal section of an agreement, specifying what penetration testers will do (web application hacking, for example) and how their time will be spent. The first thing a business needs to make clear to an external pen testing company is why it is having the test done: maybe it is a financial organisation which needs its annual check on its systems, maybe it is a company that has had its first breach and needs to make sure it doesn't happen again… Often it is just to meet some internal or external regulation/compliance requirement. Next they need to define what systems, networks, or services should be tested and when. The "when" can become important, as they may need to plan around things like unexpected outages and stay away from periods of high traffic. Then: what information can and cannot be accessed, and what the rules of engagement are - how we will compromise a system, and how aggressive the test itself is toward the infrastructure. A company may go as far as to say which techniques are forbidden and to whom the final report will be presented.

The gold standard guide for penetration testers can be found at https://pentest-standard.org. They have information on pre-engagement interactions like those covered in this chapter as well as detailed breakdowns of intelligence gathering, threat modelling , vulnerability analysis , exploitation and post-exploitation activities and reporting. The team that built it also created a technical guideline that can be useful, although some of the material is slightly dated.

Assessments

Assessments specify the test's purpose. The assessment type indicates whether the company is doing the test against its own internal goals - as with a goals- or objectives-based assessment - or because external regulation requires it, as with a compliance-based assessment. Both are concerned with the security of the product, just from different points of view. A red team assessment, by contrast, is a team of ethical hackers trying to get into the minds of actual villains: their goal is to compromise a particular system or get access to some data, with no real care for how well the whole environment is secured, only whether it survives a blast from the red team. This does mean that red team reports won't be as complete and won't aim to bring the same level of comprehension to management as the others would. Ultimately their value lies in being a security exercise and validating security designs and practices. Sometimes there are exercises in which the red team practises against a blue team - people whose aim is to defend against the red team - and this builds up the skills of both sides.

Looking more in depth at each assessment type now, let's start with goals-based. This is where we set up "missions" we want to achieve, like making sure a particular department can survive network-based attacks and that no employee accounts become compromised - that's a goal, for example. An objectives-based assessment is where you define a resource to be hit, such as a particular web server or database, and the client essentially says: "Look, there's your target, hit it any which way you can; we want to see if it falls." The main difference between the two is that a goal only needs to be achieved once, perhaps via a single method, whereas with objectives-based testing we may compromise the system in many different ways - so it's a trade-off between breadth and depth.

Rarer but still important are pre-merger penetration tests, which can be demanded by a parent firm that is going to acquire a company, to make sure it is not folding a liability into its own infrastructure. These are used to identify the current state of that company's security, and hence the steps that need to be taken (if any) so it harmonises with the procedures of the parent.

This goes hand in hand with supply chain assessments, where a business looks at the factories that make its product, the manufacturing line, internal business practices, and any other external business functions, tuning them up appropriately and putting them under technical strain to test standards. Partners often provide hardware and software to an organisation, and it is the tester's job to verify their security standards.

Lastly are compliance-based assessments, which I will cover in much more detail in their own section, 1.4.

Target Selection

Now that we've established the assessment type, this narrows down the targets (scope) of our tests: if it is an internal goals-based assessment then we would be looking at internal infrastructure, we may target users, we may go in for physical attacks, etc. Supply-chain assessments are more likely to be external - internal versus external being the axis from which the rest of the attributes follow - but there are all sorts. We could target SSIDs and try to gain access to a network via spoofing, or we could go and plant Raspberry Pis everywhere; it depends. But the degree to which we know about the system, and can hence plan our targets, is demarcated by how much knowledge the tester is given.

White box vs Grey box vs Black box

Going from most to least:

  • White box. These are essentially full-knowledge tests: external groups get all the reports and data necessary, and the client helps clear hurdles so testers can get to the actual target. Testers may be given information on networks, network diagrams, lists of systems (possibly with credentials), languages and language versions, and IP network ranges. Testers spend less time finding a way in and more time attacking the actual product, and the company gets to see whether all of its internal infrastructure is susceptible once someone has broken in. But, seeing as we have skipped the actual "breaking of the door" so to speak, maybe the bastion defences are actually really strong (or really weak!) - a white box test won't cover this.
  • Grey box is a step down in the knowledge given. It may provide some information on particular systems, but whether it provides IP ranges, languages, etc. is up to the company. Usually a grey box test is done to learn whether an environment can survive a more specialised and informed hacker. Grey box tests become more specific as testers focus their efforts on what they know most about, and this is probably the most useful and representative of an actual attack - a real attacker will be capable enough to extract some information, and now we want to predict what they would do with it.
  • Lastly there is black box testing. This is zero knowledge, and it's supposed to simulate what a distant hacker would encounter; it's also supposed to be a surprise to most parties. Testers must gather information, discover vulnerabilities, and make their way through company infrastructure or a particular system (this is where the assessment type comes in, along with the defined scope). What the company wants to see here is how much an attacker can do from the outside, as opposed to from within internal networks, and how the many hurdles get overcome - this is where the skill of the tester comes in. If they don't know the craft then relatively little will be disclosed, but if the company is preparing for a highly skilled threat actor, like organised crime or even a nation state, you can safely assume the attacker will gather a lot more information and know a lot more - it's almost as if they'll be attacking from a grey (hopefully not white) box perspective.

If you are doing a black box test you have to ask: "Why would someone have a go at us?" Are you harbouring sensitive data, are you a political target, etc.? How you answer that question will determine the type of assessment and the type of box test.

[Figure: threat actor tiers]

Rules of engagement

Once you have determined the type of assessment and the level of knowledge testers will be given, we have demarcated enough about the business to tell a tester how to engage it. These are the rules of engagement (RoE): the services we specified might have ramifications if attacked with too much gusto, specific times may be allocated for the attack, and so on. They cover:

  • The timeline for the engagement and the hours in which testing can be conducted. Some assessments are scheduled for noncritical time frames to minimise the risk of service outages, as mentioned above, while others may be scheduled during normal business hours to help test the organisation's tolerance to attacks.
  • What locations, systems, applications, or other potential targets are included or excluded. This should highlight the impact of tests on any third-party providers - AWS hosting, Internet Service Providers, or outsourced security monitoring services, for example.
  • Data handling requirements for information gathered during the penetration test. This is particularly important when engagements cover sensitive organisational data or systems. Penetration testers cannot disclose sensitive customer data if a database happens to be breached, and a company has to include this PII clause - even more so for PHI! Handling requirements often include confidentiality requirements for the findings, such as system credentials and employee emails. Data should be handled with encryption, secure storage, and possibly a chain of custody.
  • Known versus unknown elements you will encounter over the course of your test, and how much caution to apply - everything you find in a particular department may be safe to hit, but something you didn't expect should probably be confirmed with the client first…

Resources and Budgets

Depending on the type of test - say, white box - penetration testers can take advantage of internal documentation on web servers, database servers, file servers, etc. to help plan their testing and to massively accelerate the information gathering and reconnaissance phases. While there are a multitude of possible documents that a business could keep about its infrastructure, the PenTest+ exam specifically calls out:

  • Documentation on servers, APIs, and SDKs, as well as internal documentation
  • Access and Accounts
  • Budget

Documentation

The docs that an organisation creates and maintains to support its infrastructure and services are incredibly useful, and we shall start with the many forms of documentation covering web services. XML documentation that describes a web service can be written using the Web Services Description Language (WSDL), a standard that lists the different functions of a given web service. It gives a description of the endpoints, each function, and the types of inputs and outputs expected for each resource.

Before I explain the differences between WSDL and WADL, it is paramount you first understand SOAP and REST. These are two formats which sit above HTTP and give developers a standard that other developers and programs using their service can understand. The Simple Object Access Protocol (SOAP), originally developed at Microsoft, was the first push at giving organisations that kind of control. SOAP relies exclusively on XML for messaging between clients and servers, but because it is so descriptive, the descriptions start to accumulate - they cover pretty much everything. SOAP doesn't like surprises: it requires you to specify the types of data you expect as input and what you return as output, and whilst this strictness is good for big enterprises with many products, for a smaller company, or someone just starting a blogging site, all these technical add-ons become bloat.

REpresentational State Transfer (REST) is a lighter-weight alternative. It keeps things simple, introduces the notion of endpoints (URLs) which clients may use, and stays close to the bare bones of HTTP. Don't think of either of these as libraries; they are standards baked into all the public-facing functions - a convention. REST isn't tied to XML either: we can send JSON, CSV, or RSS over the wire if we want.
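A hedged, minimal illustration of the difference, using Python's third-party requests library against hypothetical endpoints (the URLs and operation names are invented for the example):

import requests

# REST: the URL identifies the resource; the response is typically JSON.
rest_resp = requests.get("https://api.example.com/tutorials/42", timeout=10)
print(rest_resp.json())

# SOAP: a single POST endpoint; the operation and its arguments live inside an XML envelope.
soap_body = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetTutorialName xmlns="http://Guru99.com/Tutorial.wsdl">
      <TutorialID>42</TutorialID>
    </GetTutorialName>
  </soap:Body>
</soap:Envelope>"""

soap_resp = requests.post(
    "https://api.example.com/TutorialService",
    data=soap_body,
    headers={"Content-Type": "text/xml"},
    timeout=10,
)
print(soap_resp.status_code)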

Bringing this back now, WSDL is the descriptive layer which sits on top of SOAP and aims to make it easier for clients, servers, and developer IDEs to understand the services a server is offering. It gives an overview of the types of messages the service sends, and because it is programming-language agnostic, every language referencing the service can understand it regardless of what it was built in. The SOAP messages go over the wire and carry the actual data in their requests/responses, whereas the WSDL just states the data types and formats for those requests/responses. Take a look at this example WSDL file:

<?xml version="1.0"?>
<definitions name="Tutorial"             
		targetNamespace="http://Guru99.com/Tutorial.wsdl"
        xmlns:tns="http://Guru99.com/Tutorial.wsdl"
        xmlns:xsd1="http://Guru99.com/Tutorial.xsd"
        xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
        xmlns="http://schemas.xmlsoap.org/wsdl/"> 
   <types>    
   		<schema targetNamespace="http://Guru99.com/Tutorial.xsd"
        xmlns="http://www.w3.org/2000/10/XMLSchema">
        
        <element name="TutorialNameRequest">       
        	<complexType>         
            	<all>            
                	<element name="TutorialName" type="string"/>        
                </all>      
            </complexType>     
       </element>    
       <element name="TutorialIDRequest">       
       		<complexType>           
            	<all>           
                	<element name="TutorialID" type="number"/>         
                </all>      
            </complexType>     
       </element>    
    </schema>
 </types>  
 <message name="GetTutorialNameInput">   
 	<part name="body" element="xsd1:TutorialIDRequest"/>  
 </message> 
 <message name="GetTutorialNameOutput">  
 	<part name="body" element="xsd1:TutorialNameRequest"/>
 </message> 
 <portType name="TutorialPortType">  
 	<operation name="GetTutorialName">    
    	<input message="tns:GetTutorialNameInput"/>     
        <output message="tns:GetTutorialNameOutput"/>   
    </operation>  
  </portType> 
  
  <binding name="TutorialSoapBinding" type="tns:TutorialPortType">  
  <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>  
 	 <operation name="GetTutorialName">    
  		<soap:operation soapAction="http://Guru99.com/GetTutorialName"/>   
        	<input>   
            	<soap:body use="literal"/>   
            </input>  
        <output>      
   <soap:body use="literal"/>   
 </output>   
 </operation>  
 </binding>  
 
 <service name="TutorialService">   
 	<documentation>TutorialService</documentation>  
    <port name="TutorialPort" binding="tns:TutorialSoapBinding">     
    	<soap:address location="http://Guru99.com/Tutorial"/>
    </port>
 </service>
</definitions>

It doesn't mention any specific data, just the shape of the data it expects. It establishes the ports and the URL which will be the main endpoint, since SOAP is a standard which can also work outside of the HTTP layer.

WADL offers the same sorts of ideas as WSDL, but for REST APIs. Organisations that run many APIs will want a consolidated standard across all the services that are in frequent communication with each other, so if we, the pen testers, can get the WADL files - which effectively state the contracts between all the fundamental parts of the infrastructure - then we can very quickly figure out where the gold is.

Here is an example WADL file

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<application xmlns="http://research.sun.com/wadl/2006/10">
    <doc xmlns:jersey="http://jersey.dev.java.net/" 
    		jersey:generatedBy="Jersey: 1.0-ea-SNAPSHOT 10/02/2008 12:17 PM"/>
    <resources base="http://localhost:9998/storage/">
        <resource path="/containers">
            <method name="GET" id="getContainers">
                <response>
                    <representation mediaType="application/xml"/>
                </response>
            </method>
            <resource path="{container}">
                <param xmlns:xs="http://www.w3.org/2001/XMLSchema" 
									type="xs:string" style="template" name="container"/>
                <method name="PUT" id="putContainer">
                    <response>
                        <representation mediaType="application/xml"/>
                    </response>
                </method>
                <method name="DELETE" id="deleteContainer"/>
                <method name="GET" id="getContainer">
                    <request>
                        <param xmlns:xs="http://www.w3.org/2001/XMLSchema" 
													type="xs:string" style="query" name="search"/>
                    </request>
                    <response>
                        <representation mediaType="application/xml"/>
                    </response>
                </method>
                <resource path="{item: .+}">
                    <param xmlns:xs="http://www.w3.org/2001/XMLSchema" 
											type="xs:string" style="template" name="item"/>
                    <method name="PUT" id="putItem">
                        <request>
                            <representation mediaType="*/*"/>
                        </request>
                        <response>
                            <representation mediaType="*/*"/>
                        </response>
                    </method>
                    <method name="DELETE" id="deleteItem"/>
                    <method name="GET" id="getItem">
                        <response>
                            <representation mediaType="*/*"/>
                        </response>
                    </method>
                </resource>
            </resource>
        </resource>
    </resources>
</application>

I hope this file makes a bit more sense as it is more web-centric, but all it’s doing is defining all the endpoints that this service will communicate on, and what sorts of data it expects.

Most projects embarked on by companies won't really use the SOAP standard, but a strong REST API instead, wrapped with types and so on. Modern projects add other forms of documentation and clarity, in most cases not for other services and clients (who are expected to abide by a standard) but for the developers themselves, who rely on structure to maintain such large applications. Documentation frameworks include Swagger, Apiary, and RAML, for example. Access to a Swagger document provides testers with a good view of how the API works and thus how they can test it.
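As a hedged sketch of why that matters to a tester, here is a short Python snippet that pulls a hypothetical Swagger/OpenAPI document and lists every documented path and method - a quick map of the attack surface the API itself advertises. The URL is invented; real specs are often, but not always, served at /swagger.json or /openapi.json.

import requests

spec_url = "https://api.example.com/swagger.json"  # hypothetical location
spec = requests.get(spec_url, timeout=10).json()

# Enumerate each documented path and the HTTP methods it accepts.
for path, methods in spec.get("paths", {}).items():
    print(path, "->", ", ".join(m.upper() for m in methods))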

Moving away from this now, we turn to the more general Software Development Kits (SDKs). These are the sets of libraries and tools which, say, compile Java code into its bytecode representation and make it a lot easier to write programs; without them we would have to write all the procedures that otherwise live in the standard libraries. All we would have is the compiler, which doesn't get us very far. For developing Android apps there is the Android SDK, which contains all the libraries necessary to interface with the functionality of the phone itself - and because there are many different types of Android phones, a single function written without it would have to know the conventions of every type of phone… no chance. Documentation on the SDK in use can give us a clue to the platforms in play, and may show us the versioning or the functionality introduced at a certain level. Organisations may create their own SDKs or use commercial or open-source ones. Understanding which SDKs are in use, and where, can help a penetration tester test applications and services. Say we were going after an old Windows 7 machine built against the Windows 7 SDK, which lets developers more easily make OS calls: what if we looked at the public documentation online and found that the cryptography library for this SDK is out of date and hence susceptible to attack? Then we could use that vulnerability to decrypt registry keys, and from there it gets real bad…

Internal documentation stretches across all technical aspects of the corporate infrastructure: sample applications and their requests, old APIs, testing environments, and other code which may prove useful. Architectural diagrams are pretty useful, as they show the relations between each section of the software and give you a good idea of the layout. Data flow diagrams are usually used to show where a system, app, or function gets its data from - and because their use is so widespread they range in complexity, but here's an example:

[Figure: example data flow diagram]

Lastly, configuration files are incredibly valuable things to get from an application or server, as they give you the setup loaded for that target - which, in the case of a white box test, speeds things up considerably.

Accounts and Access

Moving on to the side of a penetration test that looks at compromising accounts and gaining access, the approach required can come down to the type of test - white box, black box, etc. If it is a black box test, it becomes a matter of trial and error to establish what the conventions of a system are and whether it uses whitelists or blacklists. For white box assessments, the client wants to see the quality of the compromised infrastructure, so they will usually whitelist the attackers on things like IPSs and WAFs. This allows testers to continuously run tests that try to hack internal computers or retrieve customer information from a database.

Other common allowances include security exceptions at the network layer, such as letting testers bypass network access controls (NACs) that would normally prevent unauthorised devices from connecting to the network, and bypassing or disabling certificate pinning.

Access to user accounts and privileged accounts can play a significant role in the success of a penetration test. White box assessments should be conducted using appropriate accounts to enable testers to meet the complete scope of the assessment. Black box tests will require testers to acquire credentials and access. That means a strong security model may make some desired testing impossible—a good result in many cases, but it may leave hidden issues open to insider threats or more advanced threat actors.

Physical access to a facility or system is one of the most powerful tools a penetration tester can have. In white box assessments, testers often have full access to anything they need to test. Black box testers may have to use social engineering techniques or other methods we will discuss later in this book to gain access.

Network access, either on site, via a VPN, or through some other method, is also important, and testers need access to each network segment or protected zone that should be assessed. That means that a good view of the network in the form of a network diagram and a means to cross network boundaries are often crucial to success.

Budget

Technical considerations are often the first things penetration testers think about, but budgeting is also a major part of the business process of penetration testing. Determining a budget and staying within it can make the difference between a viable business and a failed effort. The budget required to complete a penetration test is determined by the scope and the rules of engagement (or, at times, vice versa: if the budget is the limiting factor, it determines what can reasonably be done as part of the assessment!).

For internal penetration testers, a budget may simply involve the allocation of time for the team to conduct the test. For external or commercial testers, a budget normally starts from an estimated number of hours based on the complexity of the test, the size of the team, and any costs associated with the test such as materials, insurance, or other expenditures that aren’t related to personnel time.

1.2 Explain Key Legal concepts

Please, please, for the love of God, don't start a pen test without the client's written, signed authorisation stating that they have given you permission to test the specified parts of their infrastructure - the go-ahead for all scoped targets. In legal terms this is captured in a Statement of Work (SOW), a legally binding document which protects you from national legislation on trespassing and unlawful testing.

Now onto the contract itself. The Master Service Agreement (MSA) aims to be the framework in which the mundane legal terms are ironed out, so any future contracts take a lot less time to formalise. It should include:

  • How clients are going to pay, the fee amount, and whether it is paid in one go or monthly.
  • If someone needs to go to court, where the case will be heard and under which jurisdiction's laws.

Usually we need the MSA first before we sign off any SOWs, as these are meant to be “per-job” contracts, whereas an MSA is the litigious binding of two entities, the only thing happening before an MSA would be meetings to establish what work they want done, and how extensive their pen test is supposed to be etc…

Another common agreement is the Non-Disclosure Agreement (NDA), and this is a bilateral agreement - meaning it is to be upheld and respected by both parties, in the sense that the tester will not reveal his work to anyone outside of the company, and the company will not leak any of the intellectual property of the tester company conducting the examinations.

Differences in environment, and differences in law

Because each state now has its own rules for penetration testers, you need to stay alert to the nuances of local law. For example, some states have made it illegal to access any computer you don't have "formal" access to, but this is a problem, as we may find ourselves needing to pivot between systems to simulate the threats of a real malicious actor. If our contracts aren't exhaustive, we may find ourselves in court.

Organisations no doubt have policies that differ by department, and corporations in different fields will have entirely different industry standards, etc.

Export restrictions cover shipments, transfers of technology, or services outside the US - see the US State Department resources.

So remember: written authorisation by the owner of the resources is paramount; building a good rapport with those you will be in regular communication with comes next; and then the type of test itself determines how you treat people like employees and customers, who may become inadvertent victims. It gets fuzzier when third parties are brought into the mix, as we have to go to their head offices, find out the contracts between the two, and see whether the provider is comfortable. With AWS infrastructure, for example, it is important we get permission - though I'm pretty sure they conduct their own penetration tests and meet any standards they need to conform to themselves. Other companies, such as your ISP, may also need to be alerted depending on the types of tests you're running.

Because of this we may find ourselves in possession of some sensitive information, and we need that "get out of jail free" card; it is only with this agreement that we can really claim to be white hats.

1.3 Explain the importance of scoping an engagement properly

Scope Vulnerabilities

Scheduling is a big part of scope: as we demarcate each section we usually attach a time limit and a complexity rating to it, and we have to answer these questions:

  1. When can/should the test(s) be run?
  2. Who should be notified?
  3. When must tests be completed? Do we start in the morning at 8 and expect them all to be done by 5? A reasonable client will always be aware of project lags, and maybe the solution is as simple as allotting an extra day, but some clients may be more hasty.

Scope creep is pretty common in nearly all projects, but don't give in to demands! Clients may request additional tasks after the SOW is signed; you're not liable for these, and they may seem "doable", but just as we've had to extend our previous task, what makes us think this will go any smoother? Money. We have to renegotiate terms if we're going to accept extra work, and again, you need another SOW signed off. Extra work piles on top of your core SOW tasks, so be wary.

Project Strategy and Risk

Whether we're running a black box or a white box test changes the work: the sheer number of exploits we have to use goes up in the former, as we get past hurdle after hurdle, while the benefit of the latter is that we can get straight to the meat and potatoes. For example, if web servers, file servers, etc. use a blacklist system, as a black box tester you may not know this and become more aggressive in your attempts - you have gained access and keep utilising it - but then you find your IP getting blocked, or your account locking out, and only then realise they use blacklists and you've been added. White box testers would know this and would be more subtle to begin with. Likewise, if access is whitelisted, you may need an exploit that adds you to the whitelist, or maybe you steal a cookie, etc…

The same goes for other infrastructure: how are we going to bypass the WAF, and how are we going to handle the Network Access Control that may be present on things like employee-facing websites? If the certificate on a site isn't pinned then we could attempt a MITM attack, but if it is - meaning the client keeps a note of that site's public key and checks the stored key against the one presented - this avenue is a dead end.
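A rough Python sketch of that pinning check, simplified: it hashes the whole server certificate rather than the SPKI (which real pinning implementations usually use), and the host and pinned digest are placeholders.

import hashlib
import socket
import ssl

HOST = "intranet.example.com"                    # hypothetical pinned site
PINNED_SHA256 = "expected-hex-digest-goes-here"  # value the client has stored

ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        cert_der = tls.getpeercert(binary_form=True)

fingerprint = hashlib.sha256(cert_der).hexdigest()
if fingerprint != PINNED_SHA256:
    print("Presented certificate does not match the pin - possible MITM (or a rotated cert)")
else:
    print("Certificate matches the pin")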

We need to explore company policies to gauge the security considerations.

Risk Acceptance

[Figure: risk acceptance]

Threat Actors

What type of role will we be assuming? This affects the parts of the infrastructure we target. Threat actors differentiate themselves primarily by skill and by motivation:

  • Script kiddie. This is the lowest level of threat actor, stealing scripts from the bigger boys to run attacks for no real cause - just reputation, really. This would be akin to the pen tester running a black box test with little complexity required; the client just wants to see how much someone who knows nothing can achieve.
  • Hacktivist. Someone who tends to hack for political causes, ranging from reasonably skilled to very skilled. They don't tend to be in it for money, but they will aim to take down an organisation's infrastructure. This would be anywhere from black to grey box.
  • Insider threats are incredibly dangerous, not necessarily because of their skill but because of how much they know. It is incredibly easy for them to do damage, especially because they were thought to be trusted. Here a white box test is appropriate, and the aim is to see whether business practices and infrastructure are decomposed well enough - with things like division of labour and separation of duties executed well enough - to minimise insider damage.
  • APTs (Advanced Persistent Threats). This is where we get into the organised crime gangs and the nation states - high-level attackers who put a lot of time and effort into understanding the company or government and launch sophisticated, crafty attacks to try to "persist" on the target server(s). Here it is recommended that the pen tester has a white box level of knowledge.

The capabilities of the pen tester have to come into question here, because realistically, do they have the same horsepower as a nation state? Definitely not. But they have the information and they have time, so their expertise compounds; it is enough to give a good idea, though always an incomplete one.

If you imagine you’re an actor getting into a role, you need to learn to think like each of these threat actors, like:

  • What's their intent? Power or revenge if they're an insider who got fired - but then how would this group think, what avenues would they use, how far would they go? They may do it for status or validation, like a script kiddie, or are they in it for the money? Hacktivists do it purely for ideological gain, so they will complete their goal by any means necessary, and they seem to be the only group willing to give themselves up for the successful completion of an attack.
  • This brings me to my next point: how would attackers categorise these assets? After you, the tester, have identified and ranked the pertinent threats (through the lens of the threat actor), which assets would this particular actor go after first?

1.4 Explain the key aspects of compliance-based assessments

Aside from the internally driven tests - goals-based, red team, etc. - there may be a need for pen testers to comply with external regulation, and this can greatly impact the requirements, practices, and end product. For example, the rules for completing a penetration test may already be set by the compliance standard, as is the case with PCI DSS (the Payment Card Industry Data Security Standard). It states that penetration tests should include the entire external, public-facing perimeter as well as the LAN-to-LAN attack surfaces. This edges towards more of a black box test, as it implies that what matters most is the company's ability to thwart rogue attackers at the surface level. PCI DSS also provides specific guidance for how the testing itself should be conducted.

Whilst pre-engagement and scope are important, the PCI DSS goes a step further and advises testers on what to do in certain scenarios - such as when passwords are discovered during an assessment (this comes under data handling requirements). We can see that the pen tester is almost relegated to a sort of health and safety inspector, as all they're doing is checking that a business is conducting all the relevant compliance procedures, not always doing unique testing or making brilliant security recommendations. On the one side it makes our job much easier; on the down side it makes it more tedious.

It's incredibly rare to never have third parties in the back of your mind when conducting a test, especially since everything is so interconnected now, so proper data isolation matters: being able to isolate the data covered by a compliance scan from other elements of an organisation's infrastructure. Scoping down to find the "unit of infrastructure" where the compliance scan will be run is good, but we also need to think about how that section relates to the rest of the organisation's infrastructure, and how effects within this unit could cause outages, ransomware infiltration, etc. in other parts.

Modern legislation expects companies processing information to keep it encrypted and to have proper procedures in place to protect the keys that encrypt incoming raw data. To pass a standard like the US federal government's Federal Information Processing Standard (FIPS), the tester will need to isolate data as much as possible and maintain a high degree of data integrity. Third-party systems often run their own tests so companies don't have to; AWS, for example, is more than happy to show compliance documentation for FIPS 140-2 on request.

Limited network access and limited storage access are also common in compliance driven assessments. PCI DSS–compliant organisations have often isolated their card processing systems on a separate network with distinct infrastructure, which means that access to the environment via the network and the ability to access storage or other underlying services may be highly restricted. Penetration testers need to understand both the environment they will test and any functional or business limitations they must respect when testing in restricted compliance environments.

I cannot overstate how important it is that you understand a company in depth, and that you can see whether its infrastructure has separated its HIPAA systems (healthcare information) from its payments data (PCI DSS information). The degree to which the company has decomposed its own setup dictates the degree to which you can scope, and hence test, each standard independently.

What Is “Compliant”? In some cases, compliance-based assessments can be easier to perform because they have specific requirements spelled out in the regulations or standards. Unfortunately, the opposite is often true as well—legal requirements use terms like best practice or due diligence instead of providing a definition, leaving organisations to take their best guess. As new laws are created, industry organisations often work to create common practices, but be aware that there may not be a hard and fast answer to “what is compliant” in every case

Individual Standards and how we should tackle them

While there are many laws and standards that you may be asked to assess against as part of a compliance-based test, a few major laws and standards drive significant amounts of penetration testing work. HIPAA, GLBA, SOX, PCI DSS, and FIPS 140-2 each have compliance requirements that may drive assessments, making it important for you to be aware of them at a high level.

HIPAA, the Health Insurance Portability and Accountability Act of 1996, does not directly require penetration testing or vulnerability scanning. It does, however, require a risk analysis, and this requirement drives testing of security controls and practices. NIST, the National Institute of Standards and Technology, has also released guidance on implementing HIPAA, which includes a recommendation that penetration testing should be part of the evaluation process. Thus, HIPAA-covered entities are likely to perform a penetration test as part of their normal ongoing assessment processes.

GLBA, the Gramm-Leach-Bliley Act, regulates how financial institutions handle personal information of individuals. It requires companies to have a written information security plan that describes processes and procedures intended to protect that information, and covered entities must also test and monitor their efforts. Penetration testing may be (and frequently is) part of that testing methodology because GLBA requires financial institutions to protect against “reasonably anticipated threats”—something that is easier to do when you are actively conducting penetration tests.

SOX, the Sarbanes-Oxley Act, is a US federal law that sets standards for US public company boards, management, and accounting firms. SOX sets standards for controls related to policy, standards, access and authentication, network security, and a variety of other requirements. A key element of SOX is a yearly requirement to assess controls and procedures, potentially driving a desire for penetration testing.

FIPS 140-2 is a US government computer security standard used to approve cryptographic modules. These modules are then certified under FIPS 140-2 and can be assessed based on that certification and the practices followed in their use. Details of FIPS 140-2 can be found at https://csrc.nist.gov/Projects/Cryptographic-Module-Validation-Program/Standards. There are many other standards and regulations that may apply to an organisation, making compliance-based assessments a common driver for penetration testing efforts. As you prepare to perform a penetration test, be sure to understand the compliance environment in which your client or organisation operates and how that environment may influence the scope, requirements, methodology, and output of your testing.